AUTHOR=Sbai Zohra
TITLE=Model checking deep neural networks: opportunities and challenges
JOURNAL=Frontiers in Computer Science
VOLUME=7
YEAR=2025
URL=https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2025.1557977
DOI=10.3389/fcomp.2025.1557977
ISSN=2624-9898
ABSTRACT=Deep neural networks (DNNs) are extensively used in current and emerging manufacturing, transportation, and healthcare applications. Their widespread use in highly safety-critical settings makes it necessary to prevent catastrophic failures during prediction. Misreading a traffic sign in an autonomous car or misanalyzing a medical record could put human lives in danger. With this awareness, the number of studies on deep neural network verification has grown dramatically in recent years. In particular, model checking provides formal guarantees about the behavior of a DNN under specified conditions, which is crucial in safety-critical applications where erroneous network outputs could have disastrous effects. Model checking is an effective approach for confirming that neural networks behave as intended by checking them against clearly stated properties. This paper highlights the critical need for, and the present challenges of, applying model-checking verification techniques to deep neural networks before relying on them in real-world applications. It surveys state-of-the-art research and outlines the most prominent future directions in the model checking of neural networks.
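To make the abstract's notion of "checking a network against clearly stated properties" concrete, here is a minimal sketch (my own illustration, not taken from the paper) of certifying local robustness of a tiny ReLU network with interval bound propagation, one common abstraction used in DNN verification. The network weights and the `verify_robust` helper are hypothetical.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Soundly propagate the input box [lo, hi] through the affine map W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def forward(x, layers):
    """Exact forward pass of a feed-forward ReLU network."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def verify_robust(x, eps, layers, target):
    """True if every input within L-infinity distance eps of x yields class `target`.
    A True answer is a formal guarantee; False may be a false alarm (IBP is
    sound but incomplete, like many abstraction-based verifiers)."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:               # ReLU is monotone: apply to both bounds
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    # Robust if the target logit's lower bound beats every other logit's upper bound.
    others = [hi[j] for j in range(len(hi)) if j != target]
    return bool(lo[target] > max(others))

# Hypothetical 2-4-2 network with fixed random weights, for demonstration only.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 2)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
x = np.array([1.0, -0.5])
target = int(np.argmax(forward(x, layers)))
print(verify_robust(x, 0.01, layers, target))
```

If the call returns True, no perturbation of radius 0.01 can flip the prediction; this is the style of formal guarantee the abstract attributes to model checking, in contrast to empirical testing of sampled inputs.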