- 1 Cognitive Psychology, Perception and Research Methods, Institute of Psychology, University of Bern, Bern, Switzerland
- 2 Department of Clinical Psychology and Psychotherapy, Institute of Psychology, University of Würzburg, Würzburg, Germany
Starting long before modern generative artificial intelligence (AI) tools became available, technological advances in constructing artificial agents spawned investigations into the extent to which interactions with such non-human counterparts resemble human-human interactions. Although artificial agents are typically not ascribed a mind of their own in the same sense as humans, several researchers concluded that social presence with, or social influence from, artificial agents can resemble that seen in interactions with humans in important ways. Here we critically review claims about the comparability of human-agent interactions and human-human interactions, outlining methodological approaches and challenges which predate the AI era but continue to influence work in the field. By connecting novel work on AI tools with broader research in the field, we aim to provide orientation and background knowledge to researchers as they investigate how artificial agents are used and perceived, and to contribute to an ongoing discussion around appropriate experimental setups and measures. We argue that, whether confronting participants with simple artificial agents or with AI-driven bots, researchers should (1) scrutinize the specificity of measures which may indicate social as well as more general, non-social processes, (2) avoid deceptive cover stories, which entail their own complications for data interpretation, and (3) see value in understanding specific social-cognitive processes in interactions with artificial agents even when the most generalizable comparisons with human-human interactions cannot be achieved in a specific experimental setup.
Introduction
When playing the game 20 questions, what information would suffice to conclude that the correct answer must be “a human being”? Clearly, positive responses to the questions “Is it bigger than a book?” and “Is it warm?” would not suffice. Likewise, after hearing that the looked-for entity “can increase the heart rate in nearby people” and “may cause frustration when being non-responsive,” one would likely not yet be ready to make a definite guess but would instead seek further information. More specific (albeit not necessarily unique) characteristics of humans to inquire about are a capability for experience (the capacity to feel and sense, e.g., to experience pain, feel calm, or be conscious; Gray et al., 2007; Malle, 2021; Weisman et al., 2017) or a fundamental need to form bonds with others (Baumeister and Leary, 2017).
A topic which has long captured the attention of philosophers and anthropologists, the characterization of human beings received a new perspective through the advancement of artificial humanoid agents (Schneider, 2019). In addition to debates on when artificial agents should be seen as equivalent to humans, researchers now also asked to what extent such agents are perceived by people as if they were human. Researchers can quickly point to a profound distinction: regardless of whether artificial agents are displayed as characters on a computer screen or embodied as robots (Oh et al., 2018; Pan and Hamilton, 2018), they may be ascribed a capacity to act and do (i.e., to make choices, recognize others, or remember things; Gray et al., 2007; Malle, 2021; Weisman et al., 2017), but are typically not ascribed a capability for experience of their own. In fact, people were even found to express unease at the idea of ascribing experience to artificial agents (Gray and Wegner, 2012; MacDorman, 2024).
Despite this striking discrepancy in the perception of humans and artificial agents, researchers observed that people can have a sense of “being with another,” or social presence, when co-located with an artificial agent (Bailenson et al., 2005; Biocca et al., 2001). A wide range of other similarities between human-agent interactions and human-human interactions was reported in a large number of studies, spanning different fields and covered in separate review articles (e.g., Chugunova and Sele, 2022; Felnhofer et al., 2023; Capraro et al., 2024). For instance, Nass et al. (1999) observed an adherence to social norms in participants who interacted with computer-controlled agents; Eyssel and Hegel (2012) found participants to apply gender stereotypes to virtual agents; Slater et al. (2013) found participants to attempt to stop an attack by one virtual agent on another; Iachini et al. (2016) and Zibrek et al. (2020) observed similar interpersonal distances toward artificial agents in virtual reality (VR) as one would expect toward real people; Gonzalez-Franco et al. (2018) found participants to help an artificial agent in avoiding electric shocks; Wienrich et al. (2018) observed a social inhibition of return effect toward artificial agents; Broadbent (2017) observed a willingness to disclose emotional experiences toward an artificial agent; Karhiy et al. (2024) reported comparable levels of perceived stress after a mindfulness intervention delivered by a teletherapist or by a virtual human; Rubo and Gamer (2020) found a reciprocation of eye contact with an artificial agent in immersive VR and Krämer et al. (2013) observed a reciprocation of an agent’s smiling behavior; Dechant et al. (2017) and Rubo and Munsch (2023) found a relative avoidance of eye contact with an artificial agent in people higher in social anxiety. In addition, several studies reported physiological (Kiilavuori et al., 2021; Pan et al., 2012; Wieser et al., 2010) and neural (Caruana et al., 2015; Kompatsiari et al., 2021; von der Pütten et al., 2014; Schilbach et al., 2010) reactions to interacting with artificial agents which resembled reactions to interacting with humans.
A large number of studies also observed differences in responses to artificial agents compared to humans. For example, participants behaved more selfishly in human-machine interactions compared to human-human interactions (e.g., Cohn et al., 2022; Chugunova and Sele, 2022); socially anxious individuals in a study by Kang and Gratch (2010) revealed more intimate information about themselves when interacting with a virtual character than when interacting with a real human, possibly reflecting a reduced fear of evaluation in interactions with mindless agents. When information was provided by a human-controlled rather than a computer-controlled agent, participants showed better results in a post-study test and higher levels of physiological arousal (Okita et al., 2007), and acted differently when protecting team partners in a cooperative game (Merritt and McGee, 2012). Participants also exhibited stronger physiological arousal when interacting with human-controlled compared to computer-controlled agents (Lim and Reeves, 2010; Ravaja et al., 2006), stronger electrodermal reactions to eye contact with a real person than with a human-controlled virtual character (Syrjämäki et al., 2020), and stronger N170 responses (over left occipitotemporal sites) to gaze from a supposedly human-controlled compared to a computer-controlled agent (Caruana et al., 2016).
Notwithstanding observations of differences in reactions to artificial agents and humans, it was proposed that artificial agents may typically be perceived as human-like when they contingently respond to one’s actions and fulfill expectations that one has toward other humans (Skarbez et al., 2017; Slater, 2009). Even more generally, the Computers are Social Actors (CASA) framework posits that humans tend to react similarly to artificial agents as to other humans, and that processing of social cues occurs automatically and mindlessly regardless of whether the counterpart is a real human or an artificial agent (Nass and Moon, 2000; Nielsen et al., 2022; von der Pütten et al., 2010; Xu et al., 2022). Note that similar models were proposed in the domain of human-human interactions, which are thought to often follow mindlessly applied heuristics (Social Heuristics Hypothesis; Rand et al., 2014; Capraro, 2024). A more nuanced view, the Threshold Model of Social Influence (Blascovich, 2002), postulates that human-controlled virtual characters (termed avatars) exert a higher social influence on humans than computer-controlled agents due to their higher degree of perceived agency (here defined as the extent to which a virtual character is perceived as a vivid person).
Recent meta-analyses observed an overall heightened self-reported sense of social presence toward avatars compared to agents but no differences in behavioral responses (Fox et al., 2015; Felnhofer et al., 2023), seemingly allowing for an integration of both models. The aim of the present commentary, however, is not to summarize data with regard to their fit to proposed models. Instead, we outline general conceptual and methodological challenges in the field which underlie a wide range of studies and which, we will argue, require careful consideration when interpreting individual studies as well as meta-analytic summaries.
Research practices and methodological challenges
Several of the above-mentioned studies may raise the question to what extent observed reactions are informative in evaluating a more general comparability of human-agent interactions with human-human interactions. In particular, one may object that recorded reactions in participants often captured some of the more mundane and fleeting aspects of social interactions, such as politeness and mindless applications of gender stereotypes (e.g., Nass et al., 1994), but may have failed to address their more profound and meaningful levels. One may argue that stronger support for the CASA framework could come from observations of higher-level social-cognitive functions, such as an activation of social-evaluative processes, in interactions with virtual agents. Two particularly relevant lines of study here are experiments using the Cyberball paradigm (Hartgerink et al., 2015; Williams and Jarvis, 2006)—where participants are socially excluded in a ball-tossing game—and the Trier Social Stress Test (TSST; Allen et al., 2017; Kirschbaum et al., 1993)—where participants carry out a performance task in front of a committee (Fallon et al., 2016; Helminen et al., 2021). Both the Cyberball paradigm (Hartgerink et al., 2015; Jauch et al., 2022; Kothgassner et al., 2014; Zadro et al., 2004) and the TSST were carried out with virtual characters as co-players or as the committee (Fallon et al., 2016; Helminen et al., 2021) and were reported to induce subjective and physiological stress responses even when participants were informed that the agents were computer-controlled.
However, two types of methodological challenges may warrant hesitation toward concluding that artificial agents can indeed trigger social-cognitive processes in a manner comparable to humans. Firstly, there is no agreement on the measures that should be used to assess social-cognitive processes (Sterna and Zibrek, 2021), and several frequently used measures may capture more general constructs. For instance, increases in heart rate or salivary cortisol levels (which are commonly observed in response to the TSST; Helminen et al., 2021) are not specific to social stress but can similarly be seen in response to non-social tasks (Bozovic et al., 2013; Taelman et al., 2010). Responses to questionnaires which are intended to represent social cognition are likewise not immune to capturing more general constructs. For instance, Cyberball studies commonly use as a main outcome a scale which is designed to assess threats to fundamental human needs but may more generally capture a sense of aversion (toward being unable to participate in ball-throwing while having no other task; Gerber et al., 2017). Several items may more generally capture frustration (e.g., “I felt somewhat frustrated during the Cyberball game”). Rubo and Munsch (2023) observed that the meta-analytically described effects of the Cyberball paradigm (Hartgerink et al., 2015) are cut in half when excluding this scale. Other measures which are assessed in response to the Cyberball task may similarly not necessarily reflect social-cognitive processes toward artificial agents. For instance, when asking about feelings of anger, responses may again reflect frustration with the testing situation rather than emotions toward the virtual agents per se.
Secondly, even when assessing variables which more clearly reflect social-cognitive processes toward others, it can be difficult to determine the extent to which they are caused by a virtual agent rather than by any other persons involved in the testing situation. Note that experimenters may influence participants’ behavior not only when they are visible, but also when they are present in the room but invisible to participants (Gallup et al., 2019) or even when their presence is merely implied, e.g., when participants are aware that a recording of their behavior may be viewed at a later point (Garcia-Leal et al., 2005; Gobel et al., 2015; Jones et al., 1997). When comparing interactions with artificial agents to interactions with humans, researchers therefore often strive to remove such effects by exposing all participants to the same form of interaction—which typically involves no contact with another person—and creating experimental manipulations merely by varying the instructions given to participants. Specifically, participants in a range of experiments interacted with computer-controlled artificial agents after either receiving veridical information about the situation or after being erroneously informed that the agents were controlled by humans (Hartgerink et al., 2015; Jauch et al., 2022; Kothgassner et al., 2017; Syrjämäki et al., 2020). Such deceptive instructions were used in 19 out of 20 studies included in a recent meta-analysis on social responses toward computer-controlled agents and (supposedly) human-controlled avatars (Felnhofer et al., 2023). Deceptive or misleading cover stories are sometimes complemented with additional steps to increase their credibility and reduce questioning by participants. For example, participants may be introduced to a supposed confederate who will control the avatar (Okita et al., 2007), experimenters may leave the room to check in with the confederate supposedly controlling the virtual character (Caruana et al., 2017), participants may be shown supposedly live footage of the confederate (Neumann et al., 2023) or may wait for another participant to take control of the virtual character (Weibel et al., 2008), and instructions may explain how the other person can control the avatar (Caruana et al., 2017; Lucas et al., 2019).
Note how interpretations of research outcomes can hinge on participants’ belief in such cover stories. While comparable behavior toward agents and avatars (Fox et al., 2015; Felnhofer et al., 2023) can be interpreted as aligning with CASA if one assumes that participants trust the cover story (indicating that social behavior is largely automatic and applied mindlessly even toward artificial agents), participants’ suspicion about the deception (Davidson et al., 2019)—which is only applied in one of the two conditions—may effectively eliminate the difference between the two conditions when participants are similarly aware of interacting with artificial agents in both conditions. Moreover, deception can introduce its own measurement biases, which can vary between participants (Hertwig and Ortmann, 2008; Kelman, 2017). While some participants who detect deceit may correspondingly treat the artificial agents as mindless entities, others may nonetheless act according to what they believe is expected from them. The presence of such demand characteristics (Nichols and Maner, 2008; Lush, 2020) may more strongly influence explicit responses—thus explaining why self-reported social presence toward avatars was often higher than toward agents (Fox et al., 2015; Felnhofer et al., 2023)—although they can also affect more implicit measures (Vecchione et al., 2016). In addition, some participants may experience negative emotional reactions upon detecting deceit (Walczyk and Newman, 2020).
It is often difficult to assess the proportion of participants who harbored suspicion toward a cover story since standardized manipulation checks are rarely used in the field. Some studies do not assess belief in the manipulation (e.g., Appel et al., 2012; Kothgassner et al., 2014; Lim and Reeves, 2010; Lucas et al., 2019), while other studies assess participants’ understanding of the proposed situation but do not directly test belief in the cover story (Kothgassner et al., 2019). Even when the manipulation succeeds in eliciting statistically significant differences in how the interaction with the virtual character is perceived, belief in the manipulation may only be moderately strong (e.g., Felnhofer et al., 2018). Only in a few studies were participants proactively approached by experimenters when they seemed suspicious about the cover story (Jauch et al., 2022; Neumann et al., 2023; Weibel et al., 2008).
Note that methodological problems which arise from falsely introducing computer-controlled agents as human-controlled avatars can similarly appear when falsely introducing human-controlled avatars as computer-controlled agents. In TSST setups carried out with artificial agents as the committee, researchers again removed all other humans from the testing room and informed participants that they were interacting with artificial agents in solitude. In reality, experimenters were still listening to participants while hiding in another room or behind a one-way mirror (Helminen et al., 2021), secretly controlling the agents’ behavior in what is referred to as a Wizard of Oz setup (Pan and Hamilton, 2018). Here again, attributing participants’ social stress to their interactions with artificial agents hinges on their belief in an erroneous cover story.
Considerations for future research
In light of these methodological challenges, we propose guidelines for future comparisons between interactions with computer-controlled artificial agents and humans or human-controlled avatars. Firstly, as argued in other fields within the behavioral sciences, deceptive cover stories should be avoided when possible as they can impair experimental control and undermine participants’ trust in psychological experimentation in the long run (Hertwig and Ortmann, 2008; Kelman, 2017). Researchers may argue that some questions cannot be addressed without the use of deception (Weiss, 2001). For instance, if the goal is to compare reactions to artificial agents and humans while holding all physical stimuli equal, one may quickly find oneself designing experiments where participants in all conditions are confronted with computer-controlled agents and merely receive diverging instructions about the character of the interaction. While such setups may continue to have their place in the field—and may more explicitly explore and attenuate confounding effects resulting from deception—it is noteworthy that alternatives exist. Specifically, research without deception (i.e., comparing interactions with humans who are truthfully introduced as humans to interactions with artificial agents which are likewise truthfully introduced as what they are) need not be seen as methodologically inferior. Several authors have comprehensively outlined how such investigations, which assess unadulterated cognitive processes as they occur naturally in specific contexts, can be particularly informative for constructing cognitive theories (Beller et al., 2012; Hutchins, 2010; Kingstone et al., 2008; Miller et al., 2019). For such comparisons, researchers might more strongly incorporate observations from outside the laboratory—such as (anonymously) logging interactions with service robots or agents in computer simulations in comparison to similarly structured interactions with humans—in order to mitigate influences of demand characteristics and other experimenter effects (Nichols and Maner, 2008).
While interactions among humans are often characterized by their repeated occurrence (e.g., in the family or among colleagues), and problematic real-life phenomena such as ostracism likewise tend to span larger time periods (Riva et al., 2016), research comparing effects of interacting with humans and artificial agents has more commonly assessed relatively short-term consequences (e.g., mood, behavior, and physiological reactions during and within an hour after an experiment; Felnhofer et al., 2023; Fox et al., 2015). Future research may profit from more strongly incorporating a longer-term perspective when assessing the extent to which artificial agents are perceived similarly to humans. In particular, to test the hypothesis of a more general equalization of the perception of artificial agents with humans, researchers could test whether people can form attachments to artificial agents similarly as to other humans (Levy et al., 2010) or whether people can satisfy their need for longer-term bonds (Baumeister and Leary, 2017) in interactions with artificial agents. Because building a long-term relationship takes time for bonds and trust to develop, the behavior of the virtual character should be adapted to the current closeness and status of the relationship, drawing on factors such as memory of previous encounters (Kasap and Magnenat-Thalmann, 2012), but also social attitudes (Ben Youssef et al., 2015) and personal self-disclosure (Wu et al., 2024).
Researchers who aim to assess the comparability of interactions with humans and artificial agents in the most general sense (e.g., assessing the general magnitude of social influence exerted by them; Fox et al., 2015; Felnhofer et al., 2023) will continue to be tasked with defining the set of outcome variables needed for such general conclusions. As in the 20 questions game, skeptics may remain doubtful whether the acquired information suffices to justify a definite guess, or whether other outcome variables may still need to be considered. Note that the metaphor of the 20 questions game was used in a discussion on whether researchers should even see themselves as engaging in such a game when carrying out research (i.e., aiming at a definite model which unifies all observations), or whether it may be more fruitful to accept certain levels of multiplicity when unification is out of sight (Newell, 1973). Similarly, other authors have since highlighted the value of understanding individual phenomena as they occur in specific situations, encouraging researchers to refrain from obstinately aiming for unified theories (Shapiro, 2009; Skarbez et al., 2021). Researchers may likewise investigate a variety of individual questions—such as the utility of artificial agents in social skills training (Howard and Gutworth, 2020) and the service industry (Pelau et al., 2021), or the characteristics which make artificial agents desirable to interact with (Hildt, 2021)—without directly aiming to test a more general equivalence of artificial agents with humans.
Moving forward in the AI era
While interactions with artificial agents have been investigated since the 1960s (Agassi and Weizenbaum, 1976), advances in generative AI, in particular large language models (LLMs) such as ChatGPT which allow for rich text-based interactions, have stimulated a new and ongoing wave of research in a wide range of fields (Carlbring et al., 2023; Rebelo et al., 2023; Hudecek et al., 2024). For instance, AI tools are expected to augment or monitor processes in health care while allowing professionals to spend more time with patients, effectively alleviating personnel shortages (Augurzky and Kolodziej, 2018; Rabbitt et al., 2015). Some studies in the field focused on the usefulness of such chatbots as an interactive text-based knowledge resource (Wester et al., 2024), but several studies also tapped into the question of how human-like AI chatbots are perceived. It was observed that AI chatbots may be rated as relatively human-like in their role as therapist during counseling sessions (Vowels et al., 2024), but interacting with them may also require an acclimatization period for users (Araujo and Bol, 2024). In addition, a chatbot’s expressions of empathy may be perceived as inauthentic (Seitz, 2024) and people do not trust AI chatbots in the same sense as they trust other humans (Montag et al., 2024).
Although the introduction of AI technology to artificial agents constitutes a groundbreaking advancement in the field, investigations into perceptions of and reactions to such chatbots may profit from research practices and experiences gained in research predating the AI era. In particular, while interactions with AI chatbots have rarely been directly contrasted with human-human interactions, past investigations into perceptions of artificial agents (with no AI capabilities) often incorporated experimental variations to allow for the most direct comparison between human-agent and human-human interactions (Felnhofer et al., 2023). Research on interactions with AI chatbots may incorporate similar experimental setups in order to collect informative data on the perceived humanness of such agents.
Since interactions with modern AI can closely resemble human-human interactions in their content but may also confront participants with misinformation beyond experimenters’ control, it has been argued on ethical grounds that participants should consistently be made aware when they interact with AI agents as opposed to human counterparts (Piñeiro-Martín et al., 2023; Tabassum et al., 2025)—a guideline which prohibits the implementation of deceptive cover stories. Note, however, that experimental researchers using AI tools may more easily abandon the practice of misleading participants into thinking that a computer-driven agent was human-controlled: while this technique sometimes appeared necessary to provoke reactions in participants when computer-controlled agents lacked the capabilities to display human-like behavior, AI chatbots may exhibit sufficient interactional realism to elicit complex social reactions even when transparently introduced to participants as artificial agents. Extending previous research, reactions to AI chatbots may then be tested with a wider range of measures rooted in psychological theory, including higher-level social processes such as reactions to social evaluation (Allen et al., 2017) or the need to be sensed and attended to by others (Baumeister and Leary, 2017), as well as physiological (Syrjämäki et al., 2020) and neural (Caruana et al., 2016) reactions to social encounters.
Importantly, researchers investigating reactions to AI chatbots may take up and continue the discussions around the criteria by which the perceived humanness of AI agents should be evaluated (Rubo and Munsch, 2023). Modern technology may furthermore help to avoid the experimenter effects which may have influenced previous laboratory research on reactions to artificial agents (Gallup et al., 2019), since AI tools can now be more easily disseminated to participants’ smartphones and tested in everyday situations with less experimenter influence. While AI will continue to enrich and stimulate a range of research fields, it can also allow for more rigorous basic research into human social processes by profiting from and extending past research conducted before the AI era.
Conclusion
The goal of comparing people’s perception of artificial agents with their perception of other humans has inspired a considerable amount of research and unearthed a range of interesting findings, several of which were taken to suggest a comparability of the two. The field was also confronted with major challenges: firstly, it proved difficult to specify what type of results would imply a more general comparability between the perception of humans and artificial agents. Secondly, it remained difficult to thoroughly remove contamination from experimenter effects and demand characteristics from observations. We suggest that future research may avoid the use of deceptive cover stories and the presence of experimenters by investigating interactions in more natural environments outside of laboratory setups. Assessed outcomes may be extended more strongly toward longer-term phenomena such as the development of interpersonal bonds. We argue that researchers need not necessarily strive for the most global comparisons but may focus on understanding individual facets of interactions between humans and artificial agents. Moving into a new era in which artificial agents can be endowed with the capability for naturalistic participation in interactions using AI models, researchers can significantly enhance our understanding of social-cognitive processes in interactions with artificial agents, both drawing on and extending research practices from more traditional work in the field.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
MR: Funding acquisition, Writing – original draft, Writing – review & editing. IN: Writing – original draft, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This work was supported by the Swiss National Science Foundation (SNSF, Grant Number PZ00P1_208909).
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The authors declare that no Gen AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Agassi, J., and Weizenbaum, J. (1976). Computer power and human reason: from judgment to calculation. Technol. Cult. 17:813. doi: 10.2307/3103715
Allen, A. P., Kennedy, P. J., Dockray, S., Cryan, J. F., Dinan, T. G., and Clarke, G. (2017). The trier social stress test: principles and practice. Neurobiol. Stress 6, 113–126. doi: 10.1016/j.ynstr.2016.11.001
Appel, J., von der Pütten, A., Krämer, N. C., and Gratch, J. (2012). Does humanity matter? Analyzing the importance of social cues and perceived agency of a computer system for the emergence of social reactions during human-computer interaction. Adv. Hum. Comput. Interact. 2012:324694. doi: 10.1155/2012/324694
Araujo, T., and Bol, N. (2024). From speaking like a person to being personal: the effects of personalized, regular interactions with conversational agents. Comput. Hum. Behav. Artif. Hum. 2:100030. doi: 10.1016/j.chbah.2023.100030
Augurzky, B., and Kolodziej, I. (2018). Fachkräftebedarf im Gesundheits- und Sozialwesen 2030: Gutachten im Auftrag des Sachverständigenrates zur Begutachtung der Gesamtwirtschaftlichen Entwicklung (Working Paper No. 06/2018). Sachverständigenrat zur Begutachtung der Gesamtwirtschaftlichen Entwicklung, Wiesbaden. Available online at: https://www.econstor.eu/bitstream/10419/184864/1/1040678963.pdf.
Bailenson, J. N., Swinth, K., Hoyt, C., Persky, S., Dimov, A., and Blascovich, J. (2005). The independent and interactive effects of embodied-agent appearance and behavior on self-report, cognitive, and behavioral markers of copresence in immersive virtual environments. Presence Teleoperat. Virt. Environ. 14, 379–393. doi: 10.1162/105474605774785235
Baumeister, R. F., and Leary, M. R. (2017). “The need to belong: desire for interpersonal attachments as a fundamental human motivation” in Interpersonal development (Oxfordshire, UK: Routledge), 57–89.
Beller, S., Bender, A., and Medin, D. L. (2012). Should anthropology be part of cognitive science? Top. Cogn. Sci. 4, 342–353. doi: 10.1111/j.1756-8765.2012.01196.x
Ben Youssef, A., Chollet, M., Jones, H., Sabouret, N., Pelachaud, C., and Ochs, M. (2015). Towards a socially adaptive virtual agent. In Proceedings of the Intelligent virtual agents: 15th international conference, IVA 2015, Delft, the Netherlands, August 26–28, 2015, 15 (pp. 3–16). Springer International Publishing.
Biocca, F., Harms, C., and Gregg, J. (2001). “The networked minds measure of social presence: pilot test of the factor structure and concurrent validity” in 4th annual international workshop on presence (Philadelphia, PA), 1–9.
Blascovich, J. (2002). “A theoretical model of social influence for increasing the utility of collaborative virtual environments” in 4th international conference on collaborative virtual environments, 25–30.
Bozovic, D., Racic, M., and Ivkovic, N. (2013). Salivary cortisol levels as a biological marker of stress reaction. Med. Arch. 67, 374–377. doi: 10.5455/medarh.2013.67.374-377
Broadbent, E. (2017). Interactions with robots: the truths we reveal about ourselves. Annu. Rev. Psychol. 68, 627–652. doi: 10.1146/annurev-psych-010416-043958
Capraro, V. (2024). The dual-process approach to human sociality: Meta-analytic evidence for a theory of internalized heuristics for self-preservation. J. Pers. Soc. Psychol. 126, 719–757. doi: 10.1037/pspa0000375
Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., et al. (2024). The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus 3:pgae191. doi: 10.1093/pnasnexus/pgae191
Carlbring, P., Hadjistavropoulos, H., Kleiboer, A., and Andersson, G. (2023). A new era in internet interventions: the advent of chat-GPT and AI-assisted therapist guidance. Internet Interv. 32:100621. doi: 10.1016/j.invent.2023.100621
Caruana, N., Brock, J., and Woolgar, A. (2015). A frontotemporoparietal network common to initiating and responding to joint attention bids. NeuroImage 108, 34–46. doi: 10.1016/j.neuroimage.2014.12.041
Caruana, N., de Lissa, P., and McArthur, G. (2016). Beliefs about human agency influence the neural processing of gaze during joint attention. Soc. Neurosci. 12, 194–206. doi: 10.1080/17470919.2016.1160953
Caruana, N., Spirou, D., and Brock, J. (2017). Human agency beliefs influence behaviour during virtual social interactions. PeerJ 5:e3819. doi: 10.7717/peerj.3819
Chugunova, M., and Sele, D. (2022). We and it: an interdisciplinary review of the experimental evidence on how humans interact with machines. J. Behav. Exp. Econ. 99:101897. doi: 10.1016/j.socec.2022.101897
Cohn, A., Gesche, T., and Maréchal, M. A. (2022). Honesty in the digital age. Manag. Sci. 68, 827–845. doi: 10.1287/mnsc.2021.3985
Davidson, C. A., Willner, C. J., Van Noordt, S. J., Banz, B. C., Wu, J., Kenney, J. G., et al. (2019). One-month stability of cyberball post-exclusion ostracism distress in adolescents. J. Psychopathol. Behav. Assess. 41, 400–408. doi: 10.1007/s10862-019-09723-4
Dechant, M., Trimpl, S., Wolff, C., Mühlberger, A., and Shiban, Y. (2017). Potential of virtual reality as a diagnostic tool for social anxiety: a pilot study. Comput. Hum. Behav. 76, 128–134. doi: 10.1016/j.chb.2017.07.005
Eyssel, F., and Hegel, F. (2012). (S)he’s got the look: gender stereotyping of robots. J. Appl. Soc. Psychol. 42, 2213–2230. doi: 10.1111/j.1559-1816.2012.00937.x
Fallon, M. A., Careaga, J. S., Sbarra, D. A., and O’Connor, M.-F. (2016). Utility of a virtual trier social stress test: initial findings and benchmarking comparisons. Psychosom. Med. 78, 835–840. doi: 10.1097/PSY.0000000000000338
Felnhofer, A., Kafka, J. X., Hlavacs, H., Beutl, L., Kryspin-Exner, I., and Kothgassner, O. D. (2018). Meeting others virtually in a day-to-day setting: investigating social avoidance and prosocial behavior towards avatars and agents. Comput. Hum. Behav. 80, 399–406. doi: 10.1016/j.chb.2017.11.031
Felnhofer, A., Knaust, T., Weiss, L., Goinska, K., Mayer, A., and Kothgassner, O. D. (2023). A virtual character’s agency affects social responses in immersive virtual reality: a systematic review and meta-analysis. Int. J. Hum. Comput. Interact. 40, –16. doi: 10.1080/10447318.2023.2209979
Fox, J., Ahn, S. J., Janssen, J. H., Yeykelis, L., Segovia, K. Y., and Bailenson, J. N. (2015). Avatars versus agents: a meta-analysis quantifying the effect of agency on social influence. Hum. Comput. Interact. 30, 401–432. doi: 10.1080/07370024.2014.921494
Gallup, A. C., Vasilyev, D., Anderson, N., and Kingstone, A. (2019). Contagious yawning in virtual reality is affected by actual, but not simulated, social presence. Sci. Rep. 9:294. doi: 10.1038/s41598-018-36570-2
Garcia-Leal, C., Parente, A. C., Del-Ben, C. M., Guimarães, F. S., Moreira, A. C., Elias, L. L. K., et al. (2005). Anxiety and salivary cortisol in symptomatic and nonsymptomatic panic patients and healthy volunteers performing simulated public speaking. Psychiatry Res. 133, 239–252. doi: 10.1016/j.psychres.2004.04.010
Gerber, J., Chang, S.-H., and Reimel, H. (2017). Construct validity of Williams’ ostracism needs threat scale. Personal. Individ. Differ. 115, 50–53. doi: 10.1016/j.paid.2016.07.008
Gobel, M. S., Kim, H. S., and Richardson, D. C. (2015). The dual function of social gaze. Cognition 136, 359–364. doi: 10.1016/j.cognition.2014.11.040
Gonzalez-Franco, M., Slater, M., Birney, M. E., Swapp, D., Haslam, S. A., and Reicher, S. D. (2018). Participant concerns for the learner in a virtual reality replication of the milgram obedience study. PLoS One 13:e0209704. doi: 10.1371/journal.pone.0209704
Gray, H. M., Gray, K., and Wegner, D. M. (2007). Dimensions of mind perception. Science 315:619. doi: 10.1126/science.1134475
Gray, K., and Wegner, D. M. (2012). Feeling robots and human zombies: mind perception and the uncanny valley. Cognition 125, 125–130. doi: 10.1016/j.cognition.2012.06.007
Hartgerink, C. H. J., van Beest, I., Wicherts, J. M., and Williams, K. D. (2015). The ordinal effects of ostracism: a meta-analysis of 120 cyberball studies. PLoS One 10:e0127002. doi: 10.1371/journal.pone.0127002
Helminen, E. C., Morton, M. L., Wang, Q., and Felver, J. C. (2021). Stress reactivity to the trier social stress test in traditional and virtual environments: a meta-analytic comparison. Psychosom. Med. 83, 200–211. doi: 10.1097/PSY.0000000000000918
Hertwig, R., and Ortmann, A. (2008). Deception in experiments: revisiting the arguments in its defense. Ethics Behav. 18, 59–92. doi: 10.1080/10508420701712990
Hildt, E. (2021). What sort of robots do we want to interact with? Reflecting on the human side of human-artificial intelligence interaction. Front. Comput. Sci. 3:671012. doi: 10.3389/fcomp.2021.671012
Howard, M. C., and Gutworth, M. B. (2020). A meta-analysis of virtual reality training programs for social skill development. Comput. Educ. 144:103707. doi: 10.1016/j.compedu.2019.103707
Hudecek, M. F., Lermer, E., Gaube, S., Cecil, J., Heiss, S. F., and Batz, F. (2024). Fine for others but not for me: the role of perspective in patients’ perception of artificial intelligence in online medical platforms. Comput. Hum. Behav. Artif. Hum. 2:100046. doi: 10.1016/j.chbah.2024.100046
Hutchins, E. (2010). Cognitive ecology. Top. Cogn. Sci. 2, 705–715. doi: 10.1111/j.1756-8765.2010.01089.x
Iachini, T., Coello, Y., Frassinetti, F., Senese, V. P., Galante, F., and Ruggiero, G. (2016). Peripersonal and interpersonal space in virtual and real environments: effects of gender and age. J. Environ. Psychol. 45, 154–164. doi: 10.1016/j.jenvp.2016.01.004
Jauch, M., Rudert, S. C., and Greifeneder, R. (2022). Social pain by non-social agents: exclusion hurts and provokes punishment even if the excluding source is a computer. Acta Psychol. 230:103753. doi: 10.1016/j.actpsy.2022.103753
Jones, D. A., Rollman, G. B., and Brooke, R. I. (1997). The cortisol response to psychological stress in temporomandibular dysfunction. Pain 72, 171–182. doi: 10.1016/s0304-3959(97)00035-3
Kang, S.-H., and Gratch, J. (2010). Virtual humans elicit socially anxious interactants’ verbal self-disclosure. Comput. Anim. Virtual Worlds 21, 473–482. doi: 10.1002/cav.345
Karhiy, M., Sagar, M., Antoni, M., Loveys, K., and Broadbent, E. (2024). Can a virtual human increase mindfulness and reduce stress? A randomised trial. Comput. Hum. Behav. Artif. Hum. 2:100069. doi: 10.1016/j.chbah.2024.100069
Kasap, Z., and Magnenat-Thalmann, N. (2012). Building long-term relationships with virtual and robotic characters: the role of remembering. Vis. Comput. 28, 87–97. doi: 10.1007/s00371-011-0630-7
Kelman, H. C. (2017). “Human use of human subjects: the problem of deception in social psychological experiments” in Research design (Oxfordshire, UK: Routledge), 189–204.
Kiilavuori, H., Sariola, V., Peltola, M. J., and Hietanen, J. K. (2021). Making eye contact with a robot: psychophysiological responses to eye contact with a human and with a humanoid robot. Biol. Psychol. 158:107989. doi: 10.1016/j.biopsycho.2020.107989
Kingstone, A., Smilek, D., and Eastwood, J. D. (2008). Cognitive ethology: a new approach for studying human cognition. Br. J. Psychol. 99, 317–340. doi: 10.1348/000712607x251243
Kirschbaum, C., Pirke, K.-M., and Hellhammer, D. H. (1993). The “trier social stress test” a tool for investigating psychobiological stress responses in a laboratory setting. Neuropsychobiology 28, 76–81. doi: 10.1159/000119004
Kompatsiari, K., Bossi, F., and Wykowska, A. (2021). Eye contact during joint attention with a humanoid robot modulates oscillatory brain activity. Soc. Cogn. Affect. Neurosci. 16, 383–392. doi: 10.1093/scan/nsab001
Kothgassner, O. D., Goreis, A., Kafka, J. X., Kaufmann, M., Atteneder, K., Beutl, L., et al. (2019). Virtual social support buffers stress response: an experimental comparison of real-life and virtual support prior to a social stressor. J. Behav. Ther. Exp. Psychiatry 63, 57–65. doi: 10.1016/j.jbtep.2018.11.003
Kothgassner, O., Griesinger, M., Kettner, K., Wayan, K., Völkl-Kernstock, S., Hlavacs, H., et al. (2017). Real-life prosocial behavior decreases after being socially excluded by avatars, not agents. Comput. Human Behav. 70, 261–269. doi: 10.1016/j.chb.2016.12.059
Kothgassner, O., Kafka, J. X., Rudyk, J., Beutl, L., Hlavacs, H., and Felnhofer, A. (2014). Does social exclusion hurt virtually like it hurts in real-life? The role of agency and social presence in the perception and experience of social exclusion. Proc. Int. Soc. Pres. Res. 15, 45–56.
Krämer, N., Kopp, S., Becker-Asano, C., and Sommer, N. (2013). Smile and the world will smile with you—the effects of a virtual agent‘s smile on users’ evaluation and behavior. Int. J. Hum.-Comput. Stud. 71, 335–349. doi: 10.1016/j.ijhcs.2012.09.006
Levy, K. N., Ellison, W. D., Scott, L. N., and Bernecker, S. L. (2010). Attachment style. J. Clin. Psychol. 67, 193–203. doi: 10.1002/jclp.20756
Lim, S., and Reeves, B. (2010). Computer agents versus avatars: responses to interactive game characters controlled by a computer or other player. Int. J. Hum.-Comput. Stud. 68, 57–68. doi: 10.1016/j.ijhcs.2009.09.008
Lucas, G. M., Lehr, J., Krämer, N., and Gratch, J. (2019). “The effectiveness of social influence tactics when used by a virtual agent” in Proceedings of the 19th ACM international conference on intelligent virtual agents (Paris, France), 22–29.
Lush, P. (2020). Demand characteristics confound the rubber hand illusion. Collabra Psychol. 6:22. doi: 10.1525/collabra.325
MacDorman, K. F. (2024). Does mind perception explain the uncanny valley? A meta-regression analysis and (de) humanization experiment. Comput. Hum. Behav. Artif. Hum. 2:100065. doi: 10.1016/j.chbah.2024.100065
Malle, B. F. (2021). What the mind is. Nat. Hum. Behav. 5, 1269–1270. doi: 10.1038/s41562-021-01183-9
Merritt, T., and McGee, K. (2012). “Protecting artificial team-mates: more seems like less” in Proceedings of the SIGCHI conference on human factors in computing systems (Austin, TX), 2793–2802.
Miller, L. C., Shaikh, S. J., Jeong, D. C., Wang, L., Gillig, T. K., Godoy, C. G., et al. (2019). Causal inference in generalizable environments: systematic representative design. Psychol. Inq. 30, 173–202. doi: 10.1080/1047840x.2019.1693866
Montag, C., Becker, B., and Li, B. J. (2024). On trust in humans and trust in artificial intelligence: a study with samples from Singapore and Germany extending recent research. Comput. Hum. Behav. Artif. Hum. 2:100070. doi: 10.1016/j.chbah.2024.100070
Nass, C., and Moon, Y. (2000). Machines and mindlessness: social responses to computers. J. Soc. Issues 56, 81–103. doi: 10.1111/0022-4537.00153
Nass, C., Moon, Y., and Carney, P. (1999). Are people polite to computers? Responses to computer-based interviewing systems. J. Appl. Soc. Psychol. 29, 1093–1109. doi: 10.1111/j.1559-1816.1999.tb00142.x
Nass, C., Steuer, J., and Tauber, E. R. (1994). “Computers are social actors” in Proceedings of the SIGCHI conference on human factors in computing systems (Boston, MA), 72–78.
Neumann, I., Käthner, I., Gromer, D., and Pauli, P. (2023). Impact of perceived social support on pain perception in virtual reality. Comput. Hum. Behav. 139:107490. doi: 10.1016/j.chb.2022.107490
Newell, A. (1973). “You can’t play 20 questions with nature and win: projective comments on the papers of this symposium” in Machine intelligence (Oxfordshire, UK: Routledge), 121–146.
Nichols, A. L., and Maner, J. K. (2008). The good-subject effect: investigating participant demand characteristics. J. Gen. Psychol. 135, 151–166. doi: 10.3200/genp.135.2.151-166
Nielsen, Y. A., Pfattheicher, S., and Keijsers, M. (2022). Prosocial behavior toward machines. Curr. Opin. Psychol. 43, 260–265. doi: 10.1016/j.copsyc.2021.08.004
Oh, C. S., Bailenson, J. N., and Welch, G. F. (2018). A systematic review of social presence: definition, antecedents, and implications. Front. Robot. AI 5:114. doi: 10.3389/frobt.2018.00114
Okita, S. Y., Bailenson, J., and Schwartz, D. L. (2007). “The mere belief of social interaction improves learning” in Proceedings of the annual meeting of the cognitive science society (Austin, TX), 1355–1360.
Pan, X., Gillies, M., Barker, C., Clark, D. M., and Slater, M. (2012). Socially anxious and confident men interact with a forward virtual woman: an experimental study. PLoS One 7:e32931. doi: 10.1371/journal.pone.0032931
Pan, X., and Hamilton, A. (2018). Why and how to use virtual reality to study human social interaction: the challenges of exploring a new research landscape. Br. J. Psychol. 109, 395–417. doi: 10.1111/bjop.12290
Pelau, C., Dabija, D.-C., and Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput. Hum. Behav. 122:106855. doi: 10.1016/j.chb.2021.106855
Piñeiro-Martín, A., García-Mateo, C., Docío-Fernández, L., and López-Pérez, M. d. C. (2023). Ethical challenges in the development of virtual assistants powered by large language models. Electronics 12:3170. doi: 10.3390/electronics12143170
Rabbitt, S. M., Kazdin, A. E., and Scassellati, B. (2015). Integrating socially assistive robotics into mental healthcare interventions: applications and recommendations for expanded use. Clin. Psychol. Rev. 35, 35–46. doi: 10.1016/j.cpr.2014.07.001
Rand, D. G., Peysakhovich, A., Kraft-Todd, G. T., Newman, G. E., Wurzbacher, O., Nowak, M. A., et al. (2014). Social heuristics shape intuitive cooperation. Nat. Commun. 5:3677. doi: 10.1038/ncomms4677
Ravaja, N., Saari, T., Turpeinen, M., Laarni, J., Salminen, M., and Kivikangas, M. (2006). Spatial presence and emotions during video game playing: does it matter with whom you play? Presence Teleoperat. Virt. Environ. 15, 381–392. doi: 10.1162/pres.15.4.381
Rebelo, A. D., Verboom, D. E., dos Santos, N. R., and de Graaf, J. W. (2023). The impact of artificial intelligence on the tasks of mental healthcare workers: a scoping review. Comput. Hum. Behav. Artif. Hum. 1:100008. doi: 10.1016/j.chbah.2023.100008
Riva, P., Montali, L., Wirth, J. H., Curioni, S., and Williams, K. D. (2016). Chronic social exclusion and evidence for the resignation stage. J. Soc. Pers. Relat. 34, 541–564. doi: 10.1177/0265407516644348
Rubo, M., and Gamer, M. (2020). Stronger reactivity to social gaze in virtual reality compared to a classical laboratory environment. Br. J. Psychol. 112, 301–314. doi: 10.1111/bjop.12453
Rubo, M., and Munsch, S. (2023). Social stress in an interaction with artificial agents in virtual reality: effects of ostracism and underlying psychopathology. Comput. Human Behav. :107915. doi: 10.1016/j.chb.2023.107915
Schilbach, L., Wilms, M., Eickhoff, S. B., Romanzetti, S., Tepest, R., Bente, G., et al. (2010). Minds made for sharing: initiating joint attention recruits reward-related neurocircuitry. J. Cogn. Neurosci. 22, 2702–2715. doi: 10.1162/jocn.2009.21401
Schneider, S. (2019). Artificial you: AI and the future of your mind. Princeton, New Jersey, United States: Princeton University Press.
Seitz, L. (2024). Artificial empathy in healthcare chatbots: does it feel authentic? Comput. Hum. Behav. Artif. Hum. 2:100067. doi: 10.1016/j.chbah.2024.100067
Shapiro, I. (2009). The flight from reality in the human sciences. Princeton, New Jersey, United States: Princeton University Press.
Skarbez, R., Neyret, S., Brooks, F. P., Slater, M., and Whitton, M. C. (2017). A psychophysical experiment regarding components of the plausibility illusion. IEEE Trans. Vis. Comput. Graph. 23, 1369–1378. doi: 10.1109/tvcg.2017.2657158
Skarbez, R., Whitton, M., and Smith, M. (2021). “Mixed reality doesn’t need standardized evaluation methods” in CHI 2021 workshop on evaluating user experiences in mixed reality.
Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. B Biol. Sci. 364, 3549–3557. doi: 10.1098/rstb.2009.0138
Slater, M., Rovira, A., Southern, R., Swapp, D., Zhang, J. J., Campbell, C., et al. (2013). Bystander responses to a violent incident in an immersive virtual environment. PLoS One 8:e52766. doi: 10.1371/journal.pone.0052766
Sterna, R., and Zibrek, K. (2021). Psychology in virtual reality: toward a validated measure of social presence. Front. Psychol. 12:705448. doi: 10.3389/fpsyg.2021.705448
Syrjämäki, A. H., Isokoski, P., Surakka, V., Pasanen, T. P., and Hietanen, J. K. (2020). Eye contact in virtual reality- a psychophysiological study. Comput. Human Behav. 112:106454. doi: 10.1016/j.chb.2020.106454
Tabassum, A., Elmahjub, E., Padela, A. I., Zwitter, A., and Qadir, J. (2025). Generative AI and the metaverse: a scoping review of ethical and legal challenges. IEEE Open J. Comput. Soc. 6, 348–359. doi: 10.1109/OJCS.2025.3536082
Taelman, J., Vandeput, S., Vlemincx, E., Spaepen, A., and Van Huffel, S. (2010). Instantaneous changes in heart rate regulation due to mental load in simulated office work. Eur. J. Appl. Physiol. 111, 1497–1505. doi: 10.1007/s00421-010-1776-0
Vecchione, M., Dentale, F., Alessandri, G., Imbesi, M. T., Barbaranelli, C., and Schnabel, K. (2016). On the applicability of the big five implicit association test in organizational settings. Curr. Psychol. 36, 665–674. doi: 10.1007/s12144-016-9455-x
von der Pütten, A. M., Krämer, N. C., Gratch, J., and Kang, S.-H. (2010). “It doesn’t matter what you are!” explaining social effects of agents and avatars. Comput. Human Behav. 26, 1641–1650. doi: 10.1016/j.chb.2010.06.012
von der Pütten, A. M., Schulte, F. P., Eimler, S. C., Sobieraj, S., Hoffmann, L., Maderwald, S., et al. (2014). Investigations on empathy towards humans and robots using fMRI. Comput. Hum. Behav. 33, 201–212. doi: 10.1016/j.chb.2014.01.004
Vowels, L. M., Francois-Walcott, R. R., and Darwiche, J. (2024). AI in relationship counselling: evaluating ChatGPT’s therapeutic capabilities in providing relationship advice. Comput. Hum. Behav. Artif. Hum. 2:100078. doi: 10.1016/j.chbah.2024.100078
Walczyk, J. J., and Newman, D. (2020). Understanding reactions to deceit. New Ideas Psychol. 59:100784. doi: 10.1016/j.newideapsych.2020.100784
Weibel, D., Wissmath, B., Habegger, S., Steiner, Y., and Groner, R. (2008). Playing online games against computer-vs. human-controlled opponents: effects on presence, flow, and enjoyment. Comput. Hum. Behav. 24, 2274–2291. doi: 10.1016/j.chb.2007.11.002
Weisman, K., Dweck, C. S., and Markman, E. M. (2017). Rethinking people’s conceptions of mental life. Proc. Natl. Acad. Sci. 114, 11374–11379. doi: 10.1073/pnas.1704347114
Weiss, D. J. (2001). Deception by researchers is necessary and not necessarily evil. Behav. Brain Sci. 24, 431–432. doi: 10.1017/s0140525x01544143
Wester, J., De Jong, S., Pohl, H., and Van Berkel, N. (2024). Exploring people’s perceptions of LLM-generated advice. Comput. Hum. Behav. Artif. Hum. 2:100072. doi: 10.1016/j.chbah.2024.100072
Wienrich, C., Gross, R., Kretschmer, F., and Müller-Plath, G. (2018). “Developing and proving a framework for reaction time experiments in VR to objectively measure social interaction with virtual agents” in 2018 IEEE conference on virtual reality and 3D user interfaces (VR) (Tuebingen/Reutlingen, Germany: IEEE).
Wieser, M. J., Pauli, P., Grosseibl, M., Molzow, I., and Mühlberger, A. (2010). Virtual social interactions in social anxiety - the impact of sex, gaze, and interpersonal distance. Cyberpsychol. Behav. Soc. Netw. 13, 547–554. doi: 10.1089/cyber.2009.0432
Williams, K. D., and Jarvis, B. (2006). Cyberball: a program for use in research on interpersonal ostracism and acceptance. Behav. Res. Methods 38, 174–180. doi: 10.3758/bf03192765
Wu, Y., Kim, J., Kwon, H., Choi, I., and Song, H. (2024). Beyond the initial impression: Building long-term relationships with health virtual agents by using social media as additional channel. Available online at: https://ssrn.com/abstract=4996131.
Xu, K., Chen, X., and Huang, L. (2022). Deep mind in social responses to technologies: a new approach to explaining the computers are social actors phenomena. Comput. Hum. Behav. 134:107321. doi: 10.1016/j.chb.2022.107321
Zadro, L., Williams, K. D., and Richardson, R. (2004). How low can you go? Ostracism by a computer is sufficient to lower self-reported levels of belonging, control, self-esteem, and meaningful existence. J. Exp. Soc. Psychol. 40, 560–567. doi: 10.1016/j.jesp.2003.11.006
Keywords: artificial agents, avatars, artificial intelligence, deception, demand characteristics
Citation: Rubo M and Neumann I (2025) Are artificial agents perceived similarly to humans? Knowns, unknowns and the road ahead. Front. Psychol. 16:1565170. doi: 10.3389/fpsyg.2025.1565170
Edited by:
Sara Ventura, University of Bologna, Italy
Reviewed by:
James Hutson, Lindenwood University, United States
Copyright © 2025 Rubo and Neumann. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Marius Rubo, marius.rubo@unibe.ch