Verifying Autonomy: Formal Methods for Reliable Decision-Making

About this Research Topic

Submission deadlines

Manuscript Submission Deadline: 16 March 2026

This Research Topic is currently accepting articles.

Background

The intersection of formal methods and autonomous systems offers a promising landscape for advancing the safety and reliability of Artificial Intelligence (AI) agents. While AI has achieved significant milestones in autonomy, assuring the safety and correctness of autonomous decisions remains a pressing concern. There is a clear need to combine the rigorous reasoning capabilities of formal logic with the adaptive decision-making processes of autonomous systems. Recent scholarly work has underscored the potential of integrating reinforcement learning with formal methods, suggesting a pathway toward robust frameworks that can plan, learn, and adapt in uncertain environments. This convergence promises richer, more reliable models of decision-making and a common foundation for fostering coordinated behavior and ensuring safety in multi-agent settings.

This Research Topic aims to bridge the gap between AI's adaptive capabilities and the safety assurances provided by formal methods. Its primary objectives are to enrich formal-method techniques with learning-based decision-making, to develop shared languages that capture knowledge, uncertainty, and strategy, and to build open tools and benchmarks that demonstrate the safe operation of autonomous agents in real-world settings. By uniting expertise from AI and formal methods, this endeavor seeks to advance intelligent agents that not only plan and learn but also provide formal guarantees about their decisions.

To gather further insights into the integration of formal methods and autonomous systems, we welcome articles addressing themes including, but not limited to, the following:

- Logic-based representations, action theories, temporal logics, and logics of programs
- AI Planning
- Automated Reasoning
- Formal Verification and Synthesis
- Knowledge Representation
- Reasoning about Actions
- Multi-Agent Systems
- Reinforcement Learning
- Neuro-symbolic approaches
- Stochastic representations, Markov decision processes (MDPs), and non-Markovian decision processes
- Benchmarks, toolchains, and datasets for reproducible evaluation
- Application studies in robotics, cyber-physical systems, or reliable AI services

We invite both survey articles that map the existing landscape and technical contributions presenting innovative methods, analyses, or case studies.

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Curriculum, Instruction, and Pedagogy
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission
  • General Commentary
  • Hypothesis and Theory
  • Mini Review

Articles accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.

Keywords: Knowledge Representation, Formal Methods, Multi-agent Systems, Reinforcement Learning, Automata

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
