Theoretical Foundations for a Science of Machine Cognition


About this Research Topic

Submission deadlines

  • Manuscript Summary Submission Deadline: 19 January 2026
  • Manuscript Submission Deadline: 9 May 2026

This Research Topic is currently accepting articles.

Background

The field of cognitive science has been dramatically reshaped by the emergence of large-scale neural architectures, most notably Transformer-based Large Language Models (LLMs). These artificial systems now demonstrate behavioral patterns and cognitive-like capacities that were once hypothetical, prompting fresh debate on their relation to natural intelligence. Recent empirical evidence shows that LLMs can perform reasoning and language-processing tasks, and even exhibit nuanced forms of apparent understanding, yet the interpretability of these achievements and the depth of their “cognitive” status remain widely contested. New advances in mechanistic interpretability and representational analysis, along with comparative neuroscience methods, have enabled researchers to trace internal computations and relate them to explanatorily rich cognitive phenomena. Nonetheless, the field continues to face unresolved questions concerning the origins and nature of meaning, intentionality, and explanation in both biological and artificial minds.

This Research Topic aims to confront the explanatory and methodological issues that arise from the increasing cognitive sophistication of artificial systems. It seeks to go beyond mere behavioral analogy, challenging reductive or superficial accounts of machine intelligence by critically examining the foundations of cognition in computational systems. By synthesizing insights from philosophy of mind, linguistics, and cognitive neuroscience with new empirical investigations and modeling techniques, the Research Topic endeavors to clarify under what conditions artificial models can be said to “understand,” how meaning and representation might emerge, and what it means to offer a genuine mechanistic explanation for cognitive-like phenomena in machines. Specific questions include: How can internal organizational principles of neural architectures be mapped to cognitive functions? In what ways do computational and human cognition converge and diverge? How does the explanatory landscape shift when linguistic and biological perspectives are integrated? What can systematic comparisons between LLMs and the human brain reveal about shared representational structures, information flow, and functional organization?

This Research Topic encompasses interdisciplinary inquiry into the mechanisms, representations, and meanings produced by large language models and related artificial neural systems, emphasizing links and contrasts with natural cognition. We welcome contributions that address, but are not limited to, the following themes:

- Mechanistic explanations and interpretability of artificial neural architectures
- Comparative analyses between computational models and human cognitive or neural systems
- Emergence and nature of meaning, representation, and intentionality in artificial agents
- Theoretical accounts of explanation, understanding, and method in AI and cognitive science
- Investigations of linguistic, semantic, and reasoning capacities in LLMs
- Reflections on embodiment, situatedness, and 4E (embodied, embedded, enactive, extended) cognition in computational contexts

We invite theoretical, empirical, methodological, and review articles from across philosophy, psychology, linguistics, neuroscience, and artificial intelligence.

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Conceptual Analysis
  • Data Report
  • Editorial
  • FAIR² Data
  • General Commentary
  • Hypothesis and Theory
  • Methods
  • Mini Review

Articles accepted for publication by our external editors following rigorous peer review incur a publishing fee, charged to authors, institutions, or funders.

Keywords: Mechanistic explanation, Artificial cognition, Large Language Models, Cognitive architecture, Representation and meaning

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
