PERSPECTIVE article
Front. Psychol.
Sec. Media Psychology
Volume 16 - 2025 | doi: 10.3389/fpsyg.2025.1565170
This article is part of the Research Topic: Human Reactions to Artificial Intelligence with Anthropomorphic Features
Are Artificial Agents Perceived Similarly to Humans? Knowns, Unknowns and the Road Ahead
Provisionally accepted
1 Division of Cognitive Psychology, Perception and Research Methods, Clinical Neuroscience Bern, University of Bern, Bern, Switzerland
2 Department of Psychology III, Faculty of Medicine, University of Würzburg, Würzburg, Bavaria, Germany
Starting long before modern generative artificial intelligence (AI) tools became available, technological advances in constructing artificial agents spawned investigations into the extent to which interactions with such non-human counterparts resemble human-human interactions. Although artificial agents are typically not ascribed a mind of their own in the same sense as humans, several researchers concluded that social presence with, or social influence from, artificial agents can resemble that seen in interactions with humans in important ways. Here we critically review claims about the comparability of human-agent interactions and human-human interactions, outlining methodological approaches and challenges which predate the AI era but continue to influence work in the field. By connecting novel work on AI tools with broader research in the field, we aim to provide orientation and background knowledge to researchers as they move forward in inquiring how artificial agents are used and perceived, and to contribute further to an ongoing discussion around appropriate experimental setups and measures. We argue that whether confronting participants with simple artificial agents or with AI-driven bots, researchers should (1) scrutinize the specificity of measures which may indicate social as well as more general, non-social processes, (2) avoid deceptive cover stories, which entail their own complications for data interpretation, and (3) see value in understanding specific social-cognitive processes in interactions with artificial agents even when the most generalizable comparisons with human-human interactions cannot be achieved in a specific experimental setup.
Keywords: artificial agents, avatars, artificial intelligence, deception, demand characteristics
Received: 22 Jan 2025; Accepted: 07 Jul 2025.
Copyright: © 2025 Rubo and Neumann. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Marius Rubo, Division of Cognitive Psychology, Perception and Research Methods, Clinical Neuroscience Bern, University of Bern, Bern, Switzerland
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.