Modern AI systems achieve remarkable performance in perception, decision-making, and prediction. However, these successes often come at the cost of interpretability and energy efficiency. In contrast, biological neural systems demonstrate robust generalization, adaptive learning, and efficiency—achievements that AI research has yet to match. A key limitation in AI evaluation is its reliance on benchmarking, which often fails to capture the nuanced, dynamic behaviors that emerge during learning and inference. In psychology and neuroscience, cognitive and behavioral tests provide deep insights into the processes and mechanisms underlying decision-making, adaptability, and reasoning in humans and animals. These methodologies are now becoming relevant for AI, offering new ways to objectively assess model behaviors beyond simple accuracy metrics. Furthermore, at the core of deep learning models lies linear algebra—particularly matrix transformations that evolve during training. Understanding these transformations can reveal structural constraints that improve learning efficiency, reduce computational cost, and expose emergent mechanisms within AI systems.
This collection aims to advance AI toward systems that are scientifically grounded, efficient, and transparent. Despite their impressive capabilities, most AI models function as black boxes, with little understanding of their internal mechanisms. This is because of the field’s emphasis on evaluation by benchmarking, which prioritizes performance accuracy over a deeper understanding of model behavior. This approach neglects key questions: How do neural networks develop internal representations? How can their operations be simplified to achieve more efficient learning and inference? Could more efficient AI systems generalize from limited data?

This Research Topic aims to foster a more rigorous approach to AI development, integrating tools from computational neuroscience, psychology, and applied mathematics. By leveraging mechanistic interpretability, mathematical analysis of model dynamics, cognitive and behavioral testing, and structural constraints such as low-rank matrix representations, we seek to build AI systems that are more explainable, adaptable, and computationally efficient. We are also interested in how these methodologies translate into real-world applications, encouraging studies that explore AI cognition and behavior across diverse domains, from robotics and healthcare to signal processing, finance, scientific discovery, and creative AI. Contributions may develop biologically plausible models that balance interpretability with performance, propose novel evaluation frameworks inspired by cognitive science and neuroscience, or explore mathematical techniques for optimizing deep neural network structure, learning, and inference.
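To make the idea of structural constraints such as low-rank matrix representations concrete, the following is a minimal illustrative sketch (not taken from any contributed work): a dense weight matrix that happens to be approximately low-rank is compressed with a truncated singular value decomposition, cutting the parameter count while preserving the transformation it computes. The matrix shapes and rank here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical trained weight matrix constructed to be (approximately) low-rank.
rank = 8
W = rng.standard_normal((256, rank)) @ rng.standard_normal((rank, 128))

# Truncated SVD keeps only the top-k singular directions of W.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 8
W_lowrank = (U[:, :k] * s[:k]) @ Vt[:k]

# Storing the two factors instead of W reduces parameters
# from 256*128 to k*(256+128).
full_params = W.size
lowrank_params = k * (W.shape[0] + W.shape[1])

# Relative reconstruction error of the low-rank factorization.
error = np.linalg.norm(W - W_lowrank) / np.linalg.norm(W)
```

The same decomposition, applied to weight matrices at successive training checkpoints, is one simple way to track how a network's effective rank evolves during learning.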
We encourage submissions that integrate insights from computational neuroscience, cognitive science, psychology, and applied mathematics to develop more rigorous methodologies for evaluating and understanding machine learning model behavior. Studies that examine how these methodologies apply across different domains, ensuring AI systems are interpretable, efficient, and effective in real-world settings, are particularly welcome.
Key themes include:
- Mechanistic Interpretability
- Mathematical and Computational Analysis
- Ecologically Valid AI Evaluation
- Biologically Inspired Architectures
- Energy-Efficient AI
- Cross-Domain Mechanistic Interpretability
- Applications in Real-World Systems
To register your interest in making an article contribution to this collection, please click here: https://www.frontiersin.org/research-topics/69995/artificial-neuroscience-machine-cognition-and-behaviour/participate-in-open-access-research-topic
Topic Editor Edward Large is the founder and CEO of Oscilloscape, Inc, and Topic Editor Ji Chul Kim is employed by, and owns stock in, Oscilloscape, Inc. The other Topic Editors declare no competing interests with regard to the Research Topic subject.
Article types and fees
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
Article types
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Conceptual Analysis
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Review
Systematic Review
Technology and Code
Keywords: Mechanistic Interpretability, Artificial Cognition, Neural Computation, Mathematical Analysis of Neural Networks, Behavioral Evaluation of AI, Computational Neuroscience, AI, Deep Learning
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.