AUTHOR=Angius Nicola, Perconti Pietro, Plebe Alessio, Acciai Alessandro
TITLE=Making sense of transformer success
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=8
YEAR=2025
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1509338
DOI=10.3389/frai.2025.1509338
ISSN=2624-8212
ABSTRACT=This article provides an epistemological analysis of current attempts to explain how the relatively simple algorithmic components of neural language models (NLMs) provide them with genuine linguistic competence. After introducing the Transformer architecture, which underlies most current NLMs, the paper first emphasizes how the central question in the philosophy of AI has shifted from “can machines think?”, as originally put by Alan Turing, to “how can machines think?”, pointing to an explanatory gap for NLMs. Subsequently, existing explanatory strategies for the functioning of NLMs are analyzed to argue that, however debated, they do not differ from the explanatory strategies used in cognitive science to explain intelligent human behavior. In particular, available experimental studies testing theory of mind, discourse entity tracking, and property induction in NLMs are examined in the light of functional analysis in the philosophy of cognitive science; the so-called copying algorithm and the induction-head phenomenon of a Transformer are shown to provide a mechanistic explanation of in-context learning; finally, current pioneering attempts to use NLMs to predict brain activation patterns during language processing are shown to involve what we call a co-simulation, in which an NLM and the brain are used to simulate and understand each other.