HYPOTHESIS AND THEORY article
Front. Artif. Intell.
Sec. AI for Human Learning and Behavior Change
This article is part of the Research Topic: AI and Neuroscience: Integrating Knowledge, Reasoning, and Theory of Mind
The extended hollowed mind: Why foundational knowledge is indispensable in the age of AI
Provisionally accepted
1 Clinic of Internal Medicine, University Hospital Bonn, Bonn, Germany
2 Department of Computer Science, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
Generative artificial intelligence (AI) presents a fundamental duality for education: it offers powerful cognitive extension while simultaneously posing a significant risk of cognitive atrophy. This paper introduces the 'hollowed mind' as a conceptual framework for understanding this risk—a state of dependency in which the frictionless availability of AI-generated answers enables users to systematically bypass the effortful cognitive processes essential for deep learning. We argue that this dynamic is driven by the 'Sovereignty Trap': a psychological mechanism in which the AI's authoritative competence tempts users to cede their own intellectual judgment, mistaking access to information for genuine ability. To substantiate this claim, we synthesize a multidisciplinary body of evidence from cognitive science (e.g., dual-process theory, cognitive load), neurobiology (e.g., conflict-monitoring networks), and developmental psychology. We use this foundation to explain the widely documented 'Expertise Duality'—why AI acts as a 'leveler' for novices but an 'amplifier' for experts. Moving beyond critique, this paper posits that the central challenge is one of environmental design, not user competence. We propose the 'Fortified Mind' as the pedagogical goal: a resilient internal architecture of indispensable knowledge and metacognitive skills required to achieve genuine 'Cognitive Sovereignty'. Finally, we outline a forward-looking research agenda focused on redesigning AI tools from 'answer engines' into cognitive training environments that promote effortful engagement. Our work provides a robust conceptual guide for educators, researchers, and system designers, arguing that in the age of AI, the cultivation of foundational knowledge is not merely relevant but more crucial than ever.
Keywords: artificial intelligence in education, cognitive sovereignty, sovereignty trap, cognitive atrophy, choice architecture, foundational knowledge
Received: 05 Oct 2025; Accepted: 20 Nov 2025.
Copyright: © 2025 Klein and Klein. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Christian R. Klein, christian.klein@ukbonn.de
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
