Event Abstract

Mind perception modulates social attention in real-time human-robot interaction

  • 1 George Mason University, United States

SUMMARY. In everyday interactions, we use information from gestures, facial expressions, or gaze direction to make inferences about what others think, feel, or intend to do. How we react to these cues is determined by how much social relevance we ascribe to them and by the degree to which they are believed to originate from an entity with a mind, capable of having internal states like emotions or intentions (i.e., mind perception; Wiese, Metta, & Wykowska, 2017). Changes in gaze direction (i.e., gaze cues; Friesen & Kingstone, 1998) are followed more strongly (i.e., gaze cueing; Frischen, Bayliss, & Tipper, 2007), for instance, when they are accompanied by a fearful rather than a neutral expression (Graham, Friesen, Fichtenholtz, & LaBar, 2010), or when they are believed to be intentional rather than pre-programmed or random (Wiese, Wykowska, Zwickel, & Müller, 2012). Mind perception is not exclusive to human agents: mind can also be ascribed to entities that do not have minds (e.g., robots) or whose mind status is ambiguous (e.g., animals), as long as their appearance and/or behavior allows them to be perceived as intentional beings (Gray, Gray, & Wegner, 2007). When mind is perceived in others, they are ascribed experience (i.e., the ability to sense and feel) and agency (i.e., the ability to plan and act), and they receive higher levels of prosociality and empathy than agents who fail to trigger mind perception (Waytz, Gray, Epley, & Wegner, 2010). While humans are normally ascribed both experience and agency, other entities are usually ascribed only experience (moral patients; e.g., a dog or a baby) or only agency (moral agents; e.g., a god or a robot), and describing robots as capable of experiencing emotional states has been reported to induce feelings of eeriness (Wegner & Gray, 2017). 
Previous studies have shown that when participants believe that a robotic agent possesses agency, its gaze signals are followed more strongly than when its actions are believed to be pre-programmed (Wiese et al., 2017). While these studies show that beliefs regarding robots’ social capabilities modulate human responses to their actions, their results are hard to generalize to everyday human-robot interactions because abstract images, rather than the dynamic gaze signals of embodied robot platforms, were used as gazing stimuli. To overcome this shortcoming, the current experiment examines whether specific beliefs about a robot’s agency and experience modulate gaze-cueing effects in real-time human-robot interactions. We use the social robot platform Meka and hypothesize that agency manipulations (e.g., telling participants that the robot has internal states based on which it moves its eyes) increase the social relevance of the robot’s gaze signals and lead to enhanced gaze-cueing effects, whereas experience manipulations (e.g., telling participants that the robot can feel emotions) induce feelings of eeriness and lead to attenuated gaze-cueing effects. METHODS & MATERIALS. Twenty undergraduate students participated in the experiment and received course credit for their participation. The experimental setup consisted of the Meka Robotics S2 humanoid robot head, two smart light bulbs connected via Wi-Fi, and a touch pad that participants used to respond (see Figure 1). The gaze-cueing task required participants to indicate, on a trial-by-trial basis, by pressing the corresponding key on the touch pad, whether the light bulb to the left or right of Meka changed its color. Crucially, before the light bulbs changed their color, Meka moved its eyes to look either toward the side where the target event was about to happen (i.e., valid trial) or toward the opposite side (i.e., invalid trial). 
The left and right light bulbs were gazed at with equal frequency (i.e., 50% cue predictivity). Meka was positioned centrally with respect to the participants during the entire task and initiated each trial by looking at the participants (i.e., mutual gaze). At the beginning of the experiment, participants first gave informed consent and then read the instructions for the gaze-cueing task, which introduced Meka either as (a) having intentions and the ability to show goal-directed behaviors (i.e., agency manipulation), or as (b) having emotions and the ability to feel others’ emotions (i.e., experience manipulation). In reality, the two conditions were identical, and participants performed 80 trials of gaze cueing (40 valid, 40 invalid) with Meka while error rates and reaction times were measured. RESULTS & DISCUSSION. Reaction-time differences between invalid and valid trials were calculated by subtracting (at the single-subject level) mean reaction times of valid trials from those of invalid trials (i.e., the gaze-cueing effect) and subjected to a univariate ANOVA with the between-participants factor Belief (agency, experience). The analysis revealed a main effect of Belief, with gaze-cueing effects being larger in the agency condition than in the experience condition. The results show that (a) a modulation of gaze cueing due to differential beliefs about a robot’s mind status can be found in real-time interactions with embodied robot platforms, and (b) believing that a robot has “agency” enhances the degree to which its gaze signals are followed, while believing that a robot has “experience” decreases gaze-cueing effects. 
Future studies need to assess why associating a robot with the ability to “feel” reduces the extent to which its gaze is followed, but previous research suggests that this may be related to a categorization conflict (i.e., robots are usually not ascribed the capability to feel; Gray & Wegner, 2012), which requires cognitive resources to resolve and potentially distracts participants from the social interaction (Weis & Wiese, 2017).
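The analysis described above can be sketched in a few lines. The following is an illustrative sketch with hypothetical data, not the authors’ analysis code: each participant’s gaze-cueing effect is the mean reaction time on invalid trials minus the mean on valid trials, and the two belief conditions are then compared with a one-way between-subjects ANOVA (here, the F statistic is computed by hand for the two-group case).

```python
# Sketch of the reported analysis, using made-up numbers.
from statistics import mean

def cueing_effect(valid_rts, invalid_rts):
    """Gaze-cueing effect for one participant (ms):
    mean RT on invalid trials minus mean RT on valid trials."""
    return mean(invalid_rts) - mean(valid_rts)

def one_way_F(group_a, group_b):
    """F statistic for a one-way between-subjects ANOVA with two groups."""
    grand = mean(group_a + group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Between-groups sum of squares (df = 1 for two groups)
    ss_between = (n_a * (mean(group_a) - grand) ** 2
                  + n_b * (mean(group_b) - grand) ** 2)
    # Within-groups sum of squares (df = n_a + n_b - 2)
    ss_within = (sum((x - mean(group_a)) ** 2 for x in group_a)
                 + sum((x - mean(group_b)) ** 2 for x in group_b))
    return (ss_between / 1) / (ss_within / (n_a + n_b - 2))

# Hypothetical per-participant cueing effects (ms) per belief condition
agency     = [25.0, 30.0, 28.0, 35.0, 22.0]
experience = [10.0, 12.0,  8.0, 15.0,  9.0]
F = one_way_F(agency, experience)  # larger F = bigger condition difference
```

With real data, the same comparison would typically be run with a standard statistics package rather than a hand-rolled F test; the sketch only makes the reported computation concrete.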

Figure 1

References


Friesen, C. K., & Kingstone, A. (1998). The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychonomic Bulletin & Review, 5(3), 490–495. https://doi.org/10.3758/BF03208827
Frischen, A., Bayliss, A. P., & Tipper, S. P. (2007). Gaze Cueing of Attention. Psychological Bulletin, 133(4), 694–724. https://doi.org/10.1037/0033-2909.133.4.694
Graham, R., Friesen, C. K., Fichtenholtz, H. M., & LaBar, K. S. (2010). Modulation of reflexive orienting to gaze direction by facial expressions. Visual Cognition, 18(3), 331–368. https://doi.org/10.1080/13506280802689281
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of Mind Perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130. https://doi.org/10.1016/j.cognition.2012.06.007
Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388. https://doi.org/10.1016/j.tics.2010.05.006
Wegner, D. M., & Gray, K. (2017). The Mind Club: Who Thinks, What Feels, and Why It Matters. Penguin.
Weis, P. P., & Wiese, E. (2017). Cognitive Conflict as Possible Origin of the Uncanny Valley. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 61, pp. 1599–1603). https://doi.org/10.1177/1541931213601763
Wiese, E., Metta, G., & Wykowska, A. (2017). Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.01663
Wiese, E., Wykowska, A., Zwickel, J., & Müller, H. J. (2012). I See What You Mean: How Attentional Selection Is Shaped by Ascribing Intentions to Others. PLOS ONE, 7(9), e45391. https://doi.org/10.1371/journal.pone.0045391

Keywords: human-robot interaction (HRI), gaze cueing, mind perception, anthropomorphism, social attention

Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.

Presentation Type: Oral Presentation

Topic: Neuroergonomics

Citation: Momen A and Wiese E (2019). Mind perception modulates social attention in real-time human-robot interaction. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00079

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 02 Apr 2018; Published Online: 27 Sep 2019.

* Correspondence: Dr. Eva Wiese, George Mason University, Fairfax, United States, ewiese@gmu.edu