
PERSPECTIVE article

Front. Psychol., 14 January 2026

Sec. Theoretical and Philosophical Psychology

Volume 16 - 2025 | https://doi.org/10.3389/fpsyg.2025.1734048

Becoming human in the age of AI: cognitive co-evolutionary processes

  • 1. School of Cultural Science, Archaeology, Linnaeus University, Kalmar, Sweden

  • 2. Palaeo-Research Institute, University of Johannesburg, Auckland Park, South Africa


Abstract

This perspective article brings into focus the unpredictable trajectory of AI-human cognitive co-evolution. Challenging the notion of a fixed ‘Stone Age brain’, it emphasizes the adaptive and plastic nature of human cognition, shaped by millions of years of technological engagement. Underlining the need for anticipatory thinking, it asks: What do we need to know now, to be able to recognize what people need to understand in a yet unexplored future of AI-human cognitive co-evolution? Rather than presenting empirical findings, this theoretical and exploratory piece seeks to stimulate reflection and dialog on how AI’s integration into human life may transform our notions of humanness, as AI systems reshape human cognition, relationships, and socio-technical practices.

1 Introduction

The integration of artificial intelligence (AI) into human life is beginning to reshape our understanding of what it means to be human. This transformation extends beyond individual skills and abilities, prompting deeper questions about the nature of AI-human interaction in futures characterized by profound and seamless AI-human co-operative co-evolutionary processes.

Extensive research provides insights into the effects of AI use at individual, organizational, and societal levels (Carroll et al., 2024). Some scholars are highly optimistic (Rawas, 2024), envisioning futures that as yet exist only in science fiction (see discussion in Cave and Dihal, 2019). Others express skepticism and view AI as an existential threat (van der Gun and Guest, 2024; Kasirzadeh, 2025), foreboding the end of the c. three-million-year-old evolutionary Homo lineage on Earth (Carlsmith, 2024). Yet others place themselves somewhere between these positions (see discussion in Farahani and Ghasemi, 2024; Larsson and Viktorelius, 2024; Qian et al., 2024; Valenzuela et al., 2024; Guingrich and Graziano, 2025; Na and Zhang, 2025).

Here I take another approach. Drawing on Embodied Cognition, Cognitive Evolution Studies/Archeology, and Futures Studies, I start from the question:

  • What do we need to know now, to be able to recognize what people need to understand in a yet unexplored future of AI-human cognitive co-evolution?

My core argument is that the same adaptive capacities that enabled Homo sapiens to thrive in diverse environments on all continents will also shape our co-evolution with AI. And, as our modes of engagement co-evolve with AI, so too do our conceptions of the human condition and our experiences of what it means to be human. Hence, we have entered an AI-human cognitive co-evolutionary trajectory whose outcomes are unpredictable (Raman et al., 2025). Few, however, have addressed what this implies for our understanding of what it means to become human in the age of AI (Andres, 2025; Rainey and Hochberg, 2025). For the sake of my argument, I will here assume that our use of AI technologies will evolve in ways that currently lie beyond our capacity to foresee. In doing so, I ascribe neither positive nor negative connotations to this development (see Guest et al., 2025a for discussion). Although I use the term AI broadly, I am primarily referring to technologies expected to evolve from what we currently understand as Artificial General Intelligence (AGI). AGI is generally defined as AI systems capable of integrating cognitive, emotional, and reasoning abilities across multiple domains to “understand, learn, and adapt to perform any intellectual task a human can perform” (Raman et al., 2025, p. 3). While the timeline for AGI implementation remains uncertain, it is likely to occur in the near future (Yenduri et al., 2025).

2 Technological engagement, a defining characteristic of Homo sapiens

Humans’ dependence on tools and technologies is not only a universal feature across cultures but also a fundamental aspect of our evolutionary history (Lombard et al., 2021). Over millions of years, the human socio-technical niche has co-evolved with our cognition through technological engagement, intricately embedded within socio-cultural contexts, our bodies, and the environments we live in. As Lombard et al. (2021, p. 144) show, the way we understand and think about the world, our individual and collective extended cognition (Clark and Chalmers, 1998; Candiotto, 2023), is continuously shaped and re-shaped in feedback loops by our technologies, our biology, our social interactions with others, and our ecological niche(s) (Andres, 2022).

The earliest known evidence of this co-evolution comes from stone tools dating back approximately 3.3 million years (Lombard et al., 2021). Our very survival is thus inseparably linked to technological intervention: we are all obligatory tool users and cannot exist without tools, as individuals or as societies (Shea, 2017). Hence, technologies are not add-ons to our experiences of what it means to be human; they are constitutive of it (Gärdenfors and Högberg, 2017; Riede et al., 2018; Crombé, 2019; Frieman, 2021; Lombard et al., 2021; Högberg et al., 2024).

Along these lines, Malafouris (2021, p. 38) explains that humans are inextricably intertwined with the plasticity of the forms that we make. Technological performances and practices “alter the ecology of our minds, re-configure the boundaries of our thinking and the ways we make sense of the world” (Malafouris, 2015, p. 351). The blind man’s stick is a well-known example of how cognition extends into tools through embodied interaction (Malafouris, 2013). The tip of the stick is not merely an external aid; it becomes part of the perceptual system, allowing the user to ‘hear, feel and think’ the environment as if the stick were part of the body. Through continuous sensorimotor coupling, the stick integrates with the hand and arm, transforming spatial awareness and guiding action. This demonstrates that thinking and perceiving emerge from the dynamic interplay of brain, body, material artifacts and environment.

Hence, technologies are not passive instruments in human hands; they are key factors and agents, or actants in Latourian terms (Latour, 2007), in shaping human cognition (White et al., 2025). Along similar lines, Federico et al. (2025) situate what they term ‘technological cognition’ within the broader framework of embodied and extended cognition, suggesting that technology expands human mental capacities and actively influences not only our cognition but also the structure and functioning of the mind itself (see also Osiurak and Reynaud, 2019; Osiurak et al., 2025).

Thus, the human mind, defined by its neuroplasticity (Lombard et al., 2021; Diniz and Crestani, 2023), is dynamically co-constituted through its embodied interaction with technologies (Lombard, 2025a, 2025b). This has not only shaped our deep evolutionary past but continues to influence us today (Solms and Turnbull, 2002). Consequently, it will remain a formative force in our engagement with AI technologies, as they influence and change our socio-cultural contexts, our bodies, and the environments we live in (Andres, 2025; Raman et al., 2025).

3 The myth of the ‘stone age brain’ and human capacity for adaptation

Over the years, numerous writers in academic and popular science contexts have argued that we possess a ‘Stone Age brain’ ill-suited to modern society (e.g., Hansen, 2019; Kenrick and Lundberg-Kenrick, 2022; Cytowic, 2024). This line of reasoning has deep epistemological roots, echoing millennia-old philosophical debates about humanity’s ‘state of nature’ as a normative guide for behavior (see Høiris, 2016). The central argument in contemporary discussions is that our brains were shaped by ancestral environments and have remained largely unchanged since prehistory. According to this line of reasoning, we are cognitively programmed for the demands of a prehistoric hunter-gatherer reality—vastly different from the modern experiences encountered in the WEIRD parts of the world (WEIRD = Western, Educated, Industrialized, Rich, and Democratic; see Henrich et al., 2010 for discussion).

‘Evolutionary mismatch’ has been coined as a core concept to explain a Stone Age brain out of sync with contemporary life (Li et al., 2018). However, the concept carries underlying assumptions that mislead our thinking. As Sapolsky (2017) demonstrates, it fosters a false impression of the brain as predetermined for a fixed hunter-gatherer socio-technical status quo.

What truly defines human cognitive evolution is our extraordinary capacity for adaptation. Today, humanity thrives across all continents and within a wide range of biomes, inhabiting environments that range from arid, scorching deserts to snow-covered, permafrost-dominated landscapes. Hence, we are not simply ‘pre-programmed’ for a Stone Age existence; we are also well-equipped for variation and change (Lombard et al., 2021; Zeller et al., 2023; Jakobsson et al., 2025).

This is demonstrated by recent advances in cognitive archeology, which uncover the complex structures underpinning human interaction with technology through time (Henley et al., 2020). Lombard (2025a,b), for instance, investigates the neuro-genetic and behavioral foundations of this interaction, focusing particularly on attention. She argues that the evolution of complex techno-behaviors, such as bowhunting, demanded a unique configuration of attentional capacities, especially sustained and visuospatial attention (see also Gärdenfors and Lombard, 2025). Using genetic and archeological data, Lombard shows that Homo sapiens evolved distinct brain structures, enhancing our ability to focus, shift, and divide attention across space and time (see also Jakobsson et al., 2025). Lombard (2025a,b) concludes that these capacities may have developed strongly in the precuneus from around 160,000 years ago, reaching modern human ranges by approximately 100,000 years ago, that is, some 200,000 years after the earliest evidence we have of anatomically modern humans (Schlebusch et al., 2017). This demonstrates that the Homo sapiens brain has continued to evolve over time.

Other scholars have explored more rapid processes. It is well established that epigenetics influences how genes are expressed in response to environmental stimuli (Ashe et al., 2021). Rather than altering the DNA sequence, epigenetic mechanisms regulate gene activity, potentially affecting brain development and cognition. These changes may be inherited across generations, providing a dynamic layer of biological plasticity that integrates with the neuroplasticity of the human mind and its embodied interaction with technology (González-Rodríguez et al., 2023).

Before moving on, it is important to conclude that human cognition has always evolved through embodied interaction with environmental and socio-technical domains, reinforced by epigenetic processes. We are equipped with a neuroplastic brain primed to co-evolve seamlessly with technology. We are not simply ‘pre-set’ with a Stone Age mind; our cognitive evolution is continuous and ongoing. Our evolutionary history gives us exceptional abilities to adapt to variation and change. As a result, we are perfectly primed to cognitively co-evolve with AI as AI systems become integrated into our cognition.

4 AI and the reshaping of human cognition

Today, AI is an integral part of human life, shaping activities such as communication, entertainment, transport, healthcare, and education (Rainey and Hochberg, 2025). It is also beginning to influence the plasticity of human cognition as it becomes increasingly integrated into both brain and body (Andres, 2025; Raman et al., 2025). AI can, for instance, assist individuals who have lost the ability to speak in partially regaining it (Littlejohn et al., 2025), enable blind people to develop a unique perception of the world (Tang, 2025), and replace lost limbs with AI-powered neural devices (Lee et al., 2025). Moreover, as AI systems interact with humans, feedback loops begin to influence cognition. Hohenstein et al. (2023), Andres (2025), and Glickman and Sharot (2025), for example, have demonstrated how AI-human interactions alter the cognitive processes underpinning communication and social relationships; human agency, responsibility, and care; and perceptual, emotional, and social judgments (see also Pedreschi et al., 2025).

By using the complexity of the human brain as a proxy (see discussion in Guest et al., 2025b), researchers have developed deep neural networks to build AI systems. Indeed, the tendency to draw direct comparisons between AI and the human brain is deeply embedded in the history of the field (Durt, 2022, p. 69). And, as human cognition serves as a foundation for building AI, many assume that AI will eventually be able to replicate human thought processes (see discussion in van Rooij et al., 2024). Yet, as Durt (2022, p. 68–69) points out, this AI-human brain analogy has caused much confusion and is “probably the strongest reason for the typical conceptions of AI as a replication or simulation of human intelligence.” Such misinterpretation significantly constrains our ability to envision alternative futures for AI-human co-evolution.

Human cognition is, in many respects, uncertain and irrational (Gärdenfors, 2024). Hence, there is little reason to think that any future AI system (yet unknown to us), whether capable of simulating agency or genuinely possessing it, would seek to replicate human cognition in all its flawed complexity. Rather, we must recognize the distinct and different evolutionary paths of humans and AI.

Human cognition has evolved over millions of years through complex interactions between biology, society, ecology, and technology. It is deeply interwoven with episodic memory, our exceptional capacities for learning and teaching, empathy, theory of mind, and our advanced abilities to narrate and comprehend multiple spatio-temporal intentions and consequences, just to name a few examples (Tomasello, 1999; Donald, 2001; Gärdenfors, 2006; Gärdenfors and Högberg, 2017; Lombard et al., 2021).

By contrast, AI is developing over a relatively short time frame in an environment where emotions and morality can be simulated but are not physically experienced and embodied. It possesses the capacity to learn from vast datasets. The systems we currently interact with are global in scale, trained on data equivalent to thousands upon thousands of human lifetimes. These systems can transfer information directly between one another, unconstrained by the physical and emotional limits that characterize human cognition (AI-2027, n.d.).

5 Augmented cognition

Siemens et al. (2022, p. 6) argue that augmented cognition, i.e., the use of real-time data analysis and adaptive systems to enhance cognitive processes, “supports information processing related to sensory memory, working memory, executive functions, and attention,” and further “provides access to an extended memory and virtually unlimited knowledge.” Such augmentation fosters synergies that enhance decision-making, problem-solving, and creative capacities by leveraging the strengths of both human cognition and machine precision. These emergent capabilities arise from the dynamic interplay between humans and AI, resulting in abilities that neither humans nor AI could achieve independently (Pedreschi et al., 2025).

It is therefore reasonable to anticipate that AI-human interaction will give rise to new forms of augmented cognition, potentially expanding our current understanding of what cognition entails. Given that the evolutionary trajectories of human and AI cognition differ significantly, it is likely that AI will develop distinctive forms of cognition, very different from humans’ (Raman et al., 2025). Whether such developments will ultimately be recognized as ‘cognition’ remains uncertain. And the pace and nature of AI’s evolution will be so fast, and so different from what we know from research on human and animal cognitive evolution (e.g., Donald, 2001; de Waal, 2017; Reber, 2024), that we will struggle to understand it (Rainey and Hochberg, 2025).

Consequently, humanity will encounter a form of cognition previously unknown to us. Not since the interbreeding events between Homo sapiens and Neanderthals or Denisovans some 45,000 to 65,000 years ago (Iasi et al., 2024; Ongaro and Huerta-Sanchez, 2024) have humans encountered a previously unknown intelligence capable of mimicking or representing human cognition. This new cognition will have super-human abilities. At the same time, it will lack basic human competences, yet mimic and emulate them in ways that create illusions of possessing them [see discussion in Carroll et al. (2024)].

If we accept these premises, it becomes essential to adopt a forward-looking approach to develop new knowledge about what this will mean for how we understand futures with human and artificial cognitive co-evolution.

6 Cognitive co-evolution in futures not yet understood

Alford (2025, p. 5) defines ‘futures thinking’ as a “cognitive approach and mindset that involves the consideration and exploration of a range of futures and their implications, engaging with complex, long-term perspectives to […] prepare for various possibilities.” It is a systematic way to anticipate and imagine alternative futures and to inform actions in the present. It involves the capacity to envision multiple possible futures diverging from the present. From this perspective, the future should not be conceptualized as a territory to map and conquer, but rather as a generative source of emergent possibilities for present action (Poli, 2021, p. 5).

It is important to emphasize that this is not about forecasting or planning for a particular future to materialize, as is often the case in, for example, current debates surrounding utopian and dystopian AI futures [see discussion in Carroll et al. (2024)]. Rather, the focus lies in cultivating our foresight capacities: the ability to imagine a plurality of futures that diverge significantly from the present (Holtorf and Högberg, 2021). This process entails, among other things, becoming critically aware of the assumptions we currently hold and recognizing how these may constrain our ability to explore AI-human co-evolution in novel ways (Pauketat et al., 2025).

As discussed by Raman et al. (2025), the future of AI-human co-evolution will be another reality, characterized by new modes of being, doing, living, and knowing that differ from those of both the present and the past (see also Andres, 2022). Hence, future generations will confront challenges within their own contemporary contexts that we, in the present, cannot foresee. They will inevitably engage with their world through their own frameworks of understanding, shaped by what resonates with their lived experiences (Jae, 2023). In doing so, they will reinterpret their past (which is our present), stripping it of the meanings we now consider significant, to equip themselves with the knowledge necessary to act within their own time (which for us remains futures yet to unfold). This challenges us to ask questions like:

  • How might AI-human collaboration redefine future understandings of what it means to be human?

  • What new forms of knowledge are required to identify uniquely human skills and abilities in such futures?

  • How might these skills and abilities develop in co-evolutionary processes, as AI systems emulate human cognition and develop what might be perceived as cognition?

Moreover, a significant challenge lies in the fact that future generations will differ from one another, necessitating distinct approaches tailored to their respective contexts. Hence, the task at hand is not a matter of translating into a single, unknown future, but rather of engaging in multiple re-translations across a succession of future contexts, unknown in advance (Terry et al., 2024; Holzheu et al., 2025). To begin exploring this terrain, we must first become adept at recognizing the emerging capacities within AI-human co-evolution even before we fully comprehend what we are seeking. This requires us to reflect on what the very act of understanding will entail, as cognition itself evolves and becomes seamlessly distributed across humans and AI (Table 1).

Table 1

Challenges and questions.

Cognitive work redistribution:
  • What aspects of deeper thinking will become weakened or strengthened in humans and/or in AI?
  • How can people become skilled in recognizing when they have externalized something that perhaps ought to remain internal?
  • What impact will cognitive offloading have on creative and critical thinking?

Skills for recognizing future AI interaction patterns:
  • What questions have we stopped asking ourselves?
  • Are we cultivating new cognitive strengths, or merely developing new dependencies for future generations?

Second-order effects:
  • What new cognitive demands arise in future AI-augmented environments?
  • How can agency be maintained and/or developed in futures when it is no longer possible to reconstruct the chain of reasoning behind decision-making processes?

Futures with alternative options:
  • What future AI-human co-evolution capabilities might emerge that we currently lack the vocabulary to conceptualize?
  • What new forms of future cognition may become possible?

The table presents the edited results of interacting with AI (the LLM Claude) to brainstorm on the question this text starts from: What do we need to know now, to be able to recognize what people need to understand in a yet unexplored future of AI-human cognitive co-evolution? It is provided here as a ‘meta-example’ of augmented cognition (see S1 for details).

Herein lies a paradox: the difficulty inherent in trying to understand futures that our current cognitive setup may be inadequate even to perceive correctly. Beginning to explore ways to overcome this paradox opens a research field we have only recently begun to recognize. It may help us understand how we can prepare ourselves to anticipate the kinds of knowledge and insight people will need in a yet-to-be-charted future shaped by the co-evolution of human and artificial cognition.

Statements

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

AH: Writing – original draft, Funding acquisition, Writing – review & editing, Conceptualization.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This work was supported by the Swedish Research Council (grant no. 2021–01522).

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was used in the creation of this manuscript. During the preparation of the text the author used M365 Copilot to revise his language. Table 1 builds on a background run using the LLM Claude (see S1 for the full 10 runs), done by Jonas Svensson at Linnaeus University. In each instance, the author reviewed and edited the content as needed and takes full responsibility for the content of the published article.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2025.1734048/full#supplementary-material

References

  • 1

    AI-2027 . (n.d). AI-2027. Available online at: https://ai-2027.com (Accessed October 18, 2025).

  • 2

    Alford K. (2025). “Building community capabilities in futures thinking” in Cultivating futures thinking in museums. ed. AlfordK. (London, New York: Routledge), 39.

  • 3

    Andres J. (2022). Adaptive human bodies and adaptive built environments for enriching futures. Front. Comput. Sci.4:931973. doi: 10.3389/fcomp.2022.931973

  • 4

    Andres J. (2025). “A scenario-based design pack for exploring multimodal human–GenAI relations.” Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25). Association for Computing Machinery, New York, NY, USA, 145–154.

  • 5

    Ashe A. Colot V. Oldroyd B. P. (2021). How does epigenetics influence the course of evolution?Philos. Trans. R. Soc. B376:20200111. doi: 10.1098/rstb.2020.0111,

  • 6

    Candiotto L. (2023). What i cannot do without you: towards a truly embedded and embodied account of the socially extended mind. Phenomenol. Cogn. Sci.22, 907929. doi: 10.1007/s11097-022-09862-2

  • 7

    Carlsmith J. (2024). Is power-seeking AI an existential risk?arXiv. doi: 10.48550/arXiv.2206.13353

  • 8

    Carroll N. Holmström J. Stahl B. C. Fabian N. E. (2024). Navigating the utopia and dystopia perspectives of artificial intelligence. Commun. Assoc. Inf. Syst.55, 854874. doi: 10.17705/1CAIS.05533

  • 9

    Cave S. Dihal K. (2019). Hopes and fears for intelligent machines in fiction and reality. Nat. Mach. Intell.1, 7478. doi: 10.1038/s42256-019-0020-9

  • 10

    Clark A. Chalmers D. (1998). The extended mind. Analysis58, 719.

  • 11

    Crombé P. (2019). Mesolithic projectile variability along the southern North Sea basin (NW Europe): hunter-gatherer responses to repeated climate change at the beginning of the Holocene. PLoS One14:e0219094. doi: 10.1371/journal.pone.0219094,

  • 12

    Cytowic R. (2024). Your stone age brain in the screen age: Coping with digital distraction and sensory overload. London: The MIT Press.

  • 13

    de Waal F. (2017). Are we smart enough to know how smart animals are?New York, London: W.W. Norton Company.

  • 14

    Diniz C. R. A. F. Crestani A. P. (2023). The times they are a-changin’: a proposal on how brain flexibility goes beyond the obvious to include the concepts of “upward” and “downward” to neuroplasticity. Mol. Psychiatry28, 977992. doi: 10.1038/s41380-022-01931-x,

  • 15

    Donald M. (2001). A mind so rare: The evolution of human consciousness. New York: Norton.

  • 16

    Durt C. (2022). “Artificial intelligence and its integration into the lifeworld” in The Cambridge handbook of responsible artificial intelligence: Interdisciplinary perspectives. Cambridge: Cambridge University Press. eds. BurkhardW.KellmeyerP.MüllerO.VönekyS., 6782.

  • 17

    Farahani M. Ghasemi G. (2024). Artificial intelligence and inequality: challenges and opportunities. Qeios. doi: 10.32388/7HWUZ2

  • 18

    Federico G. Osiurak F. Brandimonte M. A. Marangolo P. Ilardi C. R. (2025). An integrated account for technological cognition. Cogn. Neurosci., 113. doi: 10.1080/17588928.2025.2542195,

  • 19

    Frieman C. J. (2021). An archaeology of innovation. Approaching social and technological change in human society. Manchester: Manchester University Press.

  • 20

    Gärdenfors P. (2006). How Homo became sapiens: On the evolution of thinking. Oxford: Oxford University Press.

  • 21

    Gärdenfors P. (2024). Kan AI tänka? Om människor djur och robotar. Stockholm: Fri Tanke.

  • 22

    Gärdenfors P. Högberg A. (2017). The archaeology of teaching and the evolution of Homo docens. Curr. Anthropol.58, 188201. doi: 10.1086/691178

  • 23

    Gärdenfors P. Lombard M. (2025). Agency at a distance: learning causal connections. Phenomenol. Cogn. Sci.24, 789807. doi: 10.1007/s11097-024-09992-9

  • 24

    Glickman M. Sharot T. (2025). How human-AI feedback loops alter human perceptual, emotional and social judgements. Nat. Hum. Behav.9, 345359. doi: 10.1038/s41562-024-02077-2,

  • 25

    González-Rodríguez P. Füllgrabe J. Joseph B. (2023). The hunger strikes back: an epigenetic memory for autophagy. Cell Death Differ.30, 14041415. doi: 10.1038/s41418-023-01159-4,

  • 26

    Guest O. Scharfenberg N. van Rooij I. (2025b). Modern alchemy: neurocognitive reverse engineering. [Preprint] Available online at: https://philsci-archive.pitt.edu/id/eprint/25289 (Accessed October 15, 2025).

  • 27

    Guest O. Suarez M. Müller B. C. N. (2025a). Against the uncritical adoption of “AI” Technologies in Academia: Zenodo. doi: 10.5281/zenodo.17065098

  • 28

    Guingrich R. E. Graziano M. S. A. (2025). P(doom) versus AI optimism: attitudes toward artificial intelligence and the factors that shape them. J. Technol. Behav. Sci. doi: 10.1007/s41347-025-00512-3

  • 29

    Hansen A. (2019). Skärmhjärnan. Hur en hjärna i osynk med sin tid kan göra oss stressade deprimerade och ångestfyllda. Stockholm: Bonnier Fakta.

  • 30

    Henley T. B. Rossano M. J. Kardas E. P. (2020). Handbook of cognitive archaeology. Psychology in prehistory. London, New York: Routledge.

  • 31

    Henrich J. Heine S. J. Norenzayan A. (2010). The weirdest people in the world? The weirdest people in the world?Behav. Brain Sci.33, 6183. doi: 10.1017/S0140525X0999152X,

  • 32

    Högberg A. Lombard M. Högberg A. (2024). Human socio-technical evolution through the lens of an abstracted-wheel experiment: a critical look at a micro-society laboratory study. PLoS One19:e0310503. doi: 10.1371/journal.pone.0310503,

  • 33

    Hohenstein J. et al . (2023). Artificial intelligence in communication impacts language and social relationships. Sci. Rep.13:5487. doi: 10.1038/s41598-023-30938-9,

  • 34

    Høiris O. (2016). Ideer om menneskets oprindelse. Fortaellinger om menneskets oprindelse fra Det Gamle Testamente till senmoderne videnskab. Aarhus: Aarhus University Press.

  • 35

    Holtorf C. Högberg A. (2021). Cultural heritage and the future. London, New York: Routledge.

  • 36

    Holzheu S. Kösters K. Brandt S. (2025). “Exploring Futurium’s Futures Boxes” in Cultivating futures thinking in museums. ed. AlfordK. (London, New York: Routledge), 4451.

  • 37

    Iasi L. N. M. Chintalapati M. Skov L. Mesa A. B. Hajdinjak M. Peter B. M. et al . (2024). Neanderthal ancestry through time: insights from genomes of ancient and present-day humans. Science386:eadq3010. doi: 10.1126/science.adq3010,

  • 38

    Jae K. (2023). Decolonizing futures practice: opening up authentic alternative futures. J. Futures Stud.28, 1524.

  • 39

    Jakobsson M. Bernhardsson C. McKenna J. Hollfelder N. Vicente M. Edlund H. et al . (2025). Homo sapiens-specific evolution unveiled by ancient southern African genomes. Nature. doi: 10.1038/s41586-025-09811-4,

  • 40

    Kasirzadeh A. (2025). Two types of AI existential risk: decisive and accumulative. Philos. Stud.182, 19752003. doi: 10.1007/s11098-025-02301-3

  • 41

    Kenrick D. T. Lundberg-Kenrick D. E. (2022). Solving modern problems with a stone-age brain. Human evolution and the seven fundamental motives. Washington DC: APA Life Tools.

  • 42

    Larsson S. Viktorelius M. (2024). Reducing the contingency of the world: magic, oracles, and machine-learning technology. AI & Soc. 39, 183–193. doi: 10.1007/s00146-022-01394-2

  • 43

    Latour B. (2007). Reassembling the social. An introduction to actor-network-theory. Oxford: Oxford University Press.

  • 44

    Lee J. Y. Lee S. Mishra A. Yan X. McMahan B. Gaisford B. et al. (2025). Brain-computer interface control with artificial intelligence copilots. Nat. Mach. Intell. 7, 1510–1523. doi: 10.1038/s42256-025-01090-y

  • 45

    Li N. P. van Vugt M. Colarelli S. M. (2018). The evolutionary mismatch hypothesis: implications for psychological science. Curr. Dir. Psychol. Sci. 27, 38–44. doi: 10.1177/0963721417731378

  • 46

    Littlejohn K. T. Cho C. J. Liu J. R. Silva A. B. Yu B. Anderson V. R. et al. (2025). A streaming brain-to-voice neuroprosthesis to restore naturalistic communication. Nat. Neurosci. 28, 902–912. doi: 10.1038/s41593-025-01905-6

  • 47

    Lombard M. (2025a). From complex techno-behaviour to complex attention through the genes of the precuneus. J. Archaeol. Method Theory 32:46. doi: 10.1007/s10816-025-09716-6

  • 48

    Lombard M. (2025b). Towards an archaeology of attention: a neuro-genetic exploration. J. Archaeol. Sci. 182:106344. doi: 10.1016/j.jas.2025.106344

  • 49

    Lombard M. Högberg A. (2021). Four-field co-evolutionary model for human cognition: variation in the Middle Stone Age/Middle Palaeolithic. J. Archaeol. Method Theory 28, 142–177.

  • 50

    Malafouris L. (2013). How things shape the mind. A theory of material engagement. Cambridge MA: The MIT Press.

  • 51

    Malafouris L. (2015). Metaplasticity and the primacy of material engagement. Time Mind 8, 351–371. doi: 10.1080/1751696X.2015.1111564

  • 52

    Malafouris L. (2021). Making hands and tools: step to a process archaeology of mind. World Archaeol. 53, 38–55.

  • 53

    Na C. Zhang X. (2025). When misunderstanding meets artificial intelligence: the critical role of trust in human-AI and human-human team communication and performance. Front. Psychol. 16. doi: 10.3389/fpsyg.2025.1637339

  • 54

    Ongaro L. Huerta-Sanchez E. (2024). A history of multiple Denisovan introgression events in modern humans. Nat. Genet. 56, 2612–2622. doi: 10.1038/s41588-024-01960-y

  • 55

    Osiurak F. Bryche C. Metaireau M. (2025). The neural basis of transactive technical cognition. NeuroImage 321:121527. doi: 10.1016/j.neuroimage.2025.121527

  • 56

    Osiurak F. Reynaud E. (2019). The elephant in the room: what matters cognitively in cumulative technological culture. Behav. Brain Sci. 43:e156. doi: 10.1017/S0140525X19003236

  • 57

    Pauketat J. V. T. Ladak A. Anthis J. R. (2025). World-making for a future with sentient AI. Br. J. Soc. Psychol. 64:e12844. doi: 10.1111/bjso.12844

  • 58

    Pedreschi D. Pappalardo L. Ferragina E. Baeza-Yates R. Barabási A.-L. Dignum F. et al. (2025). Human-AI coevolution. Artif. Intell. 339:104244. doi: 10.1016/j.artint.2024.104244

  • 59

    Poli R. (2021). The challenges of futures literacy. Futures 132:102800. doi: 10.1016/j.futures.2021.102800

  • 60

    Qian Y. Siau K. L. Nah F. F. (2024). Societal impacts of artificial intelligence: ethical, legal, and governance issues. Societal Impacts 3:100040. doi: 10.1016/j.socimp.2024.100040

  • 61

    Rainey P. B. Hochberg M. E. (2025). Could humans and AI become a new evolutionary individual? PNAS 122:e2509122122. doi: 10.1073/pnas.2509122122

  • 62

    Raman R. Kowalski R. Achuthan K. Iyer A. Nedungadi P. (2025). Navigating artificial general intelligence development: societal, technological, ethical, and brain-inspired pathways. Sci. Rep. 15:8443. doi: 10.1038/s41598-025-92190-7

  • 63

    Rawas S. (2024). AI: the future of humanity. Discov. Artif. Intell. 4:25. doi: 10.1007/s44163-024-00118-3

  • 64

    Reber S. A. (2024). Differences teach us more than similarities: the need for evolutionary thinking in comparative cognition. Comp. Cogn. Behav. Rev. 19, 49–53. doi: 10.3819/CCBR.2024.190006

  • 65

    Riede F. Johannsen N. N. Högberg A. (2018). The role of play objects and object play in human cognitive evolution and innovation. Evol. Anthropol. 27, 46–59. doi: 10.1002/evan.21555

  • 66

    Sapolsky R. (2017). Behave. The biology of humans at our best and worst. London: Vintage, Penguin Random House.

  • 67

    Schlebusch C. M. et al. (2017). Southern African ancient genomes estimate modern human divergence to 350,000 to 260,000 years ago. Science 358, 652–655. doi: 10.1126/science.aao6266

  • 68

    Shea J. J. (2017). Occasional, obligatory, and habitual stone tool use in hominin evolution. Evol. Anthropol. 26, 200–217. doi: 10.1002/evan.21547

  • 69

    Siemens G. et al. (2022). Human and artificial cognition. Comput. Educ. Artif. Intell. 3:100107. doi: 10.1016/j.caeai.2022.100107

  • 70

    Solms M. Turnbull O. (2002). The brain and the inner world. An introduction to the neuroscience of subjective experience. London: Karnac.

  • 71

    Tang (2025). Human-centred design and fabrication of a wearable multimodal visual assistance system. Nat. Mach. Intell. 7, 627–638. doi: 10.1038/s42256-025-01018-6

  • 72

    Terry N. Castro A. Chibwe B. Karuri-Sebina G. Savu C. Pereira L. (2024). Inviting a decolonial praxis for future imaginaries of nature: introducing the entangled time tree. Environ. Sci. Pol. 151:103615. doi: 10.1016/j.envsci.2023.103615

  • 73

    Tomasello M. (1999). The cultural origins of human cognition. London: Harvard University Press.

  • 74

    Valenzuela A. Puntoni S. Hoffman D. Castelo N. De Freitas J. Dietvorst B. et al. (2024). How artificial intelligence constrains the human experience. J. Assoc. Consum. Res. 9, 241–256. doi: 10.1086/730709

  • 75

    Van der Gun L. Guest O. (2024). Artificial intelligence: panacea or non-intentional dehumanisation? J. Human-Technology Relations 2, 1–11. doi: 10.59490/jhtr.2024.2.7272

  • 76

    van Rooij I. Guest O. Adolfi F. de Haan R. Kolokolova A. Rich P. (2024). Reclaiming AI as a theoretical tool for cognitive science. Comput. Brain Behav. 7, 616–636. doi: 10.1007/s42113-024-00217-5

  • 77

    White B. Clark A. Guènin-Carlut A. Constant A. Di Paolo L. D. (2025). Shifting boundaries, extended minds: ambient technology and extended allostatic control. Synthese 205:81. doi: 10.1007/s11229-025-04924-9

  • 78

    Yenduri G. Murugan R. Kumar Reddy Maddikunta P. Bhattacharya S. Sudheer D. Bhushan Savarala B. (2025). Artificial general intelligence: advancements, challenges, and future directions in AGI research. IEEE Access 13, 134325–134356. doi: 10.1109/ACCESS.2025.3592708

  • 79

    Zeller E. Timmermann A. Yun K. S. Raia P. Stein K. Ruan J. (2023). Human adaptation to diverse biomes over the past 3 million years. Science 380, 604–608. doi: 10.1126/science.abq1288

Keywords

AI-human cognitive co-evolution, cognitive archeology, cognitive evolution studies/archeology, embodied cognition, evolutionary mismatch, future studies, neuroplasticity, stone age brain

Citation

Högberg A (2026) Becoming human in the age of AI: cognitive co-evolutionary processes. Front. Psychol. 16:1734048. doi: 10.3389/fpsyg.2025.1734048

Received

28 October 2025

Revised

18 December 2025

Accepted

22 December 2025

Published

14 January 2026

Volume

16 - 2025

Edited by

Janet Pauketat, Sentience Institute, United States

Reviewed by

Sheila L. Macrine, University of Massachusetts Dartmouth, United States

Josh Andres, Australian National University, Australia

Copyright

*Correspondence: Anders Högberg,

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
