EDITORIAL article
Front. Robot. AI
Sec. Computational Intelligence in Robotics
This article is part of the Research Topic "Theory of Mind in Robots and Intelligent Systems"
Theory of Mind in Robots and Intelligent Systems
Provisionally accepted
- 1 University of Southern California, Los Angeles, United States
- 2 Carnegie Mellon University, Pittsburgh, United States
- 3 Rice University, Houston, United States
The hope, and in some cases fear, that intelligent machines will understand the mental states of their human counterparts, that is, have a theory of mind (ToM), has been with us since the advent of the idea that machines may one day be as intelligent as we are. Early evidence of this appeared in responses to ELIZA, Joseph Weizenbaum's script-based agent for studying natural language communication between humans and machines (Weizenbaum, 1966). Weizenbaum's study participants reported positive interactions with the agent, even hinting that it actually understood their psychological needs, as if it could represent their mental states. More recently, researchers examined the aptitude of large language models (LLMs) on classic tests of ToM reasoning and found that, in at least some cases, they achieve human-level performance (Strachan et al., 2024). Despite such impressive achievements in machine ToM, research is still needed to realize robust ToM for robots and other intelligent systems. For example, trivial alterations to classic ToM tasks can undermine the performance of LLM-based machine intelligences (Ullman, 2023).

The goal of this Research Topic is two-fold: 1) to improve state-of-the-art ToM models adapted from cognitive science for robots and 2) to advance new models of social cognition developed for the unique challenges of robots and intelligent systems. Both sub-goals are rich with research challenges. The papers in this Research Topic explore theory of mind from multiple perspectives, spanning human-robot coordination, assessment methodologies, and collective intelligence. The selected contributions demonstrate the breadth of contemporary ToM research, from engineering real-time collaborative systems to developing frameworks for benchmarking socio-cognitive abilities in artificial agents.
These works examine ToM at multiple scales, from dyadic human-robot interactions to emergent dynamics in multi-agent teams, advancing both our theoretical understanding of mental state reasoning and its practical implementation in artificial intelligence and robotics.

Effective human-robot collaboration requires agents to anticipate each other's actions and achieve coordination with minimal explicit communication. The papers in this section explore computational mechanisms that enable artificial agents to reason about human mental states and leverage this understanding for seamless coordination. These contributions span sparse communication strategies, dynamic real-time coordination in heterogeneous teams, and flexible collaborative patterns that emerge without predefined roles.

Jiang et al. propose a relevance model grounded in decision theory and theory of mind to explain how humans select information for communication under real-time constraints (Jiang et al., 2025). Tested in a simulated navigation task where participants and AI agents cooperatively avoid traps, the model accurately predicts human communication choices and outperforms the GPT-4 LLM in the same cooperative scenario. The work demonstrates that when humans receive assistance from an AI agent using the relevance model, they achieve significantly higher performance and provide higher ratings than with a heuristic-based approach.

As ToM capabilities become increasingly central to artificial intelligence research, the field requires robust frameworks for evaluating and developing these capacities in computational systems. The papers in this section address this need by proposing novel assessment approaches and experimental platforms. One contribution investigates the computational modeling of higher-order ToM, moving beyond simple mental state attribution to reasoning about nested beliefs.
The other presents a comprehensive developmental framework grounded in psychology that provides structured environments for studying socio-cognitive abilities in both reinforcement learning agents and LLMs.

Tavella et al. emphasize the current literature's focus on first-order ToM models and investigate the potential for creating computational models of higher-order ToM (Tavella et al., 2024). Higher-order ToM involves reasoning about nested mental states (e.g., "I think that you think that she believes..."), which is crucial for sophisticated social interactions throughout human development. By incorporating higher-order ToM into AI systems, artificial agents could better coordinate complex actions in domains such as warehouse logistics and healthcare, where understanding multiple layers of perspective-taking enhances collaborative performance.

Kovač et al. present the SocialAI School, a framework that leverages developmental psychology to study artificial socio-cultural agents (Kovač et al., 2024). Drawing inspiration from Michael Tomasello's and Jerome Bruner's work on socio-cognitive development, they outline a broader set of concepts than is typically studied in AI, including social cognition (joint attention, perspective taking), communication, social learning, formats, and scaffolding. Their tool offers a customizable suite of procedurally generated environments that can be used with both multimodal reinforcement learning agents and text-based LLMs, providing the AI community with a versatile platform for investigating how agents can enter, learn from, and contribute to a surrounding culture.

While individual ToM capabilities enable dyadic coordination, the complexity of multi-agent systems introduces emergent properties that arise from the interplay of multiple minds reasoning about each other. This section examines how ToM functions at the collective level, exploring how artificial social intelligence integrates into human team dynamics.
This contribution bridges individual cognitive mechanisms with system-level outcomes, demonstrating that ToM's impact extends beyond pairwise interactions to fundamentally shape team effectiveness and collective problem-solving.

Bendell et al. examine the integration of Artificial Social Intelligence (ASI) into human teams, focusing on how ASI can enhance teamwork processes in complex tasks (Bendell et al., 2025). In their study, teams of three participants collaborated with ASI advisors designed to exhibit Artificial Theory of Mind (AToM) while engaged in an interdependent task. Using a profiling model to categorize teams by taskwork and teamwork potential, they found that teams with higher potential in these dimensions had more positive perceptions of team processes and of the ASI advisors. Notably, while team performance mediated perceptions of team processes, perceptions of ASI advisors were positively correlated with team potential independent of performance outcomes, highlighting the need for ASI systems to be adaptable and responsive to specific team characteristics.
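The nested mental states central to the higher-order ToM work discussed above can be made concrete with a small data structure. The sketch below is a hypothetical illustration, not drawn from any of the contributed papers: the `Belief` class and its methods are our own names, used only to show how an attribution like "I think that you think that she believes X" might be represented and its order computed recursively.

```python
from dataclasses import dataclass


@dataclass
class Belief:
    """A belief held by an agent: its content is either a plain
    proposition (a string) or another agent's Belief (nesting)."""
    holder: str
    content: "str | Belief"

    def order(self) -> int:
        # Depth of nesting: one belief operator per level.
        if isinstance(self.content, Belief):
            return 1 + self.content.order()
        return 1

    def __str__(self) -> str:
        if isinstance(self.content, Belief):
            return f"{self.holder} thinks that {self.content}"
        return f"{self.holder} believes {self.content!r}"


# "The robot thinks that Alice thinks that Bob believes the trap is north."
nested = Belief("robot", Belief("alice", Belief("bob", "the trap is north")))
print(nested.order())  # three nested belief operators
print(nested)
```

A reasoner with only first-order ToM handles `order() == 1` attributions; coordinating in the multi-agent settings described above plausibly requires tracking deeper structures of this kind.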
Keywords: theory of mind, social cognition, artificial social intelligence, human-robot interaction, decision-theoretic models of social reasoning, Bayesian theory of mind, cognitive architectures, computational models of cognition
Received: 19 Nov 2025; Accepted: 01 Dec 2025.
Copyright: © 2025 Gurney, Hughes, Pynadath and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Nikolos Gurney
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.