Event Abstract

Effects of embodiment on social attention mechanisms in human-robot interaction

  • 1 George Mason University, United States

Introduction. Processing nonverbal behavior when we interact with others is essential for social interactions, as it allows us to make predictions about what others feel, think, or intend to do (i.e., mentalizing; Frith & Frith, 2006). Of particular importance for social interactions is gaze direction, as it not only indicates the gazer’s current focus of interest but also shifts the observer’s attention to gazed-at locations in order to establish joint attention between gazer, observer, and a potential object of interest (Frischen, Bayliss, & Tipper, 2007). How strongly we react to gaze signals depends on whether we attribute mind to the gazer and treat their changes in gaze direction as intentional (Wiese, Wykowska, Zwickel, & Müller, 2012). While mind perception occurs naturally when interacting with other humans, nonhuman agents need to trigger mind perception by displaying characteristics such as humanlike appearance (Looser & Wheatley, 2010), reliable behavior (Pfeiffer, Timmermans, Bente, Vogeley, & Schilbach, 2011), or physical embodiment (Kiesler, Powers, Fussell, & Torrey, 2008). As soon as mind is perceived in nonhuman agents, more social relevance is ascribed to their behaviors, and their social signals (i.e., changes in gaze direction) are followed more strongly than when no mind is perceived (see Wiese, Metta, & Wykowska, 2017). Although these studies show that beliefs regarding robots’ social capabilities modulate human responses to their actions, they share the major disadvantage of having been conducted in highly controlled laboratory settings, which reduces their generalizability to real-world applications. Since attentional orienting to gaze signals (i.e., gaze cueing; Friesen & Kingstone, 1998) has been shown to be sensitive to social context information such as the similarity between gazer and observer (Hungr & Hunt, 2012), the human-likeness of the gazer (e.g., Martini, Buzzell, & Wiese, 2015), or the gazer’s reliability (Abubshait & Wiese, 2017), it is essential to investigate attentional orienting to robot gaze signals in realistic interactions with embodied robot platforms. In the current experiment, we examine the effect of physical embodiment on the degree to which robot gaze signals are followed. In particular, we display gaze cues on the social robot platform Meka and measure how increases in embodiment (i.e., from “pictures” to “videos” to “real” interaction) affect the degree to which participants orient their attention in response to the gaze cues. We hypothesize that attentional orienting to Meka’s gaze cues will increase as its physical embodiment increases from low (Meka image as gazer) to medium (Meka video as gazer) to high (real Meka robot as gazer).

Methods & Materials. Forty-five participants were recruited from George Mason University and compensated with course credit. Participants were assigned to one of three embodiment levels (i.e., low, medium, high), provided informed consent, and subsequently performed a gaze-cueing task with the Meka robot (presented as images, videos, or the physically present robot, depending on condition). The experimental procedure was similar across conditions: each trial started with Meka looking straight at the participant to establish mutual gaze, followed by a change in gaze direction to either the left or the right (the gaze cue).
Several hundred milliseconds later, target items appeared either at the location Meka had looked at (i.e., valid trials) or at a different location (i.e., invalid trials), and participants were asked to respond as fast and as accurately as possible (via button press) while reaction times and error rates were measured. In the “image” and “video” conditions, participants had to discriminate target letters (“F” versus “T”); in the “realistic” condition, they had to react to a change in the color of two light bulbs. In all conditions, Meka cued the observers’ gaze to the actual target location in 50% of all trials (i.e., non-predictive gaze cueing). In the low and medium embodiment conditions, participants interacted with static images or dynamic videos of Meka; in the high embodiment condition, participants performed the gaze-cueing task with the fully embodied robot in real time. Participants in the low and medium embodiment conditions completed 160 trials; participants in the high embodiment condition completed 80 trials. After completion, participants were debriefed and thanked for their participation.

Results & Discussion. Preliminary data analysis was based on fifteen participants in each group (45 in total). One participant in the fully embodied or “realistic” condition was excluded due to technical difficulties, resulting in a total of 44 participants. For each participant, mean reaction times for valid trials were subtracted from mean reaction times for invalid trials, and the resulting gaze-cueing effects were subjected to a univariate ANOVA with the between-subjects factor Embodiment (low, medium, high). The ANOVA revealed that the level of physical embodiment influenced the gaze-cueing effect (F(2, 38) = 3.73, p = .033, ηG² = .16), with a significant difference in gaze cueing between the “realistic” and the “video” condition (t(25) = 3.34, p = .003, Membodied = 19.87 ms, Mvideo = 4.39 ms), but no significant difference between the “image” and the “video” condition (t(26) = 0.59, p = .56, Mvideo = 4.39 ms, Mpicture = 8.00 ms); see Figure 1. The results suggest that the physical embodiment of a robot can enhance reactions to its nonverbal signals, such as changes in gaze direction, potentially by increasing their social relevance (in line with Wiese et al., 2017). This finding indicates that physical embodiment plays an important role when measuring cognitive performance in social interactions between humans and robotic agents. Consequently, future experiments should allow participants to interact with social robots under ecologically valid conditions in order to fully understand the impact robots have on human cognitive processing.
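To make the trial structure concrete, the sketch below simulates the logic of the non-predictive gaze-cueing design described above (a gaze cue followed by a target that appears at the cued location on 50% of trials). It is a minimal illustration under stated assumptions, not the authors' implementation: the timing constants and helper names are hypothetical, and the stimulus-presentation software used in the experiment is not specified in the abstract.

```python
import random
from dataclasses import dataclass

# Assumed timing values; the abstract only states that targets appeared
# "several hundred milliseconds" after the gaze cue.
MUTUAL_GAZE_MS = 1000        # Meka looks straight at the participant (assumed duration)
CUE_TARGET_SOA_MS = 500      # cue-to-target interval (assumed)

@dataclass
class Trial:
    cue_direction: str       # "left" or "right" (Meka's gaze shift)
    target_side: str         # side at which the target appears
    valid: bool              # True if the target appears at the gazed-at location

def build_trials(n_trials: int) -> list[Trial]:
    """Non-predictive design: the gaze cue matches the target location on 50% of trials."""
    trials = []
    for _ in range(n_trials):
        cue = random.choice(["left", "right"])
        valid = random.random() < 0.5
        target = cue if valid else ("right" if cue == "left" else "left")
        trials.append(Trial(cue, target, valid))
    random.shuffle(trials)
    return trials

# 160 trials in the image/video conditions, 80 in the fully embodied condition
image_or_video_block = build_trials(160)
embodied_block = build_trials(80)
```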
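The analysis reduces to computing one gaze-cueing score per participant (mean invalid-trial RT minus mean valid-trial RT) and comparing these scores across the three embodiment groups. The sketch below illustrates that pipeline; the file name and column names are hypothetical, and scipy's one-way ANOVA and t-test stand in for whichever statistics package the authors actually used (note that scipy does not report the generalized eta squared given above).

```python
import pandas as pd
from scipy import stats

# Hypothetical trial-level data with columns:
# participant, embodiment ("low"/"medium"/"high"), validity ("valid"/"invalid"), rt_ms
df = pd.read_csv("gaze_cueing_trials.csv")

# One gaze-cueing effect per participant: mean invalid RT minus mean valid RT
mean_rt = (df.groupby(["participant", "embodiment", "validity"])["rt_ms"]
             .mean()
             .unstack("validity"))
mean_rt["cueing_effect"] = mean_rt["invalid"] - mean_rt["valid"]
effects = mean_rt.reset_index()

# One-way between-subjects ANOVA with the factor Embodiment
groups = [g["cueing_effect"].to_numpy() for _, g in effects.groupby("embodiment")]
f_stat, p_anova = stats.f_oneway(*groups)

# Follow-up independent-samples t-test, e.g., realistic (high) vs. video (medium)
high = effects.loc[effects["embodiment"] == "high", "cueing_effect"]
medium = effects.loc[effects["embodiment"] == "medium", "cueing_effect"]
t_stat, p_t = stats.ttest_ind(high, medium)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}; "
      f"realistic vs. video: t = {t_stat:.2f}, p = {p_t:.3f}")
```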

Figure 1. Embodiment level influences attentional orienting to gaze cues. Gray dots represent single participants. Black dots represent group means. Error bars depict SEM. ** p < .01

References

Abubshait, A., & Wiese, E. (2017). You look human, but act like a machine: Agent appearance
and behavior modulate different aspects of human–robot interaction. Frontiers in Psychology,
8, 1–12. https://doi.org/10.3389/fpsyg.2017.01393
Frith, C. D., & Frith, U. (2006). How we predict what other people are going to do. Brain Research,
1079(1), 36–46. https://doi.org/10.1016/j.brainres.2005.12.126
Frischen, A., Bayliss, A. P., & Tipper, S. P. (2007). Gaze cueing of attention: Visual attention,
social cognition, and individual differences. Psychological Bulletin, 133(4), 694–724.
https://doi.org/10.1037/0033-2909.133.4.694
Friesen, C. K., & Kingstone, A. (1998). The eyes have it! Reflexive orienting is triggered by
nonpredictive gaze. Psychonomic Bulletin & Review, 5(3), 490–495.
https://doi.org/10.3758/BF03208827
Hungr, C. J., & Hunt, A. R. (2012). Physical self-similarity enhances the gaze-cueing effect.
Quarterly Journal of Experimental Psychology, 65(7), 1250–1259.
https://doi.org/10.1080/17470218.2012.690769
Kiesler, S., Powers, A., Fussell, S. R., & Torrey, C. (2008). Anthropomorphic interactions with a
robot and robot–like agent. Social Cognition, 26(2), 169–181.
https://doi.org/10.1521/soco.2008.26.2.169
Looser, C. E., & Wheatley, T. (2010). The tipping point of animacy. How, when, and where we
perceive life in a face. Psychological Science, 21(12), 1854–1862.
https://doi.org/10.1177/0956797610388044
Martini, M., Buzzell, G., & Wiese, E. (2015). Agent appearance modulates mind attribution and
social attention in human-robot interaction. In Social Robotics (Vol. 1, pp. 431–439).
https://doi.org/10.1007/978-3-319-25554-5
Pfeiffer, U. J., Timmermans, B., Bente, G., Vogeley, K., & Schilbach, L. (2011). A non-verbal
turing test: Differentiating mind from machine in gaze-based social interaction. PLoS ONE,
6(11). https://doi.org/10.1371/journal.pone.0027591
Wiese, E., Metta, G., & Wykowska, A. (2017). Robots as intentional agents: Using neuroscientific
methods to make robots appear more social. Frontiers in Psychology, 8, 1663.
https://doi.org/10.3389/fpsyg.2017.01663
Wiese, E., Wykowska, A., Zwickel, J., & Müller, H. J. (2012). I see what you mean: How attentional
selection is shaped by ascribing intentions to others. PLoS ONE, 7(9), 1–7.
https://doi.org/10.1371/journal.pone.0045391

Keywords: human-robot interaction, social attention, social robotics, social cognition, embodied cognition

Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.

Presentation Type: Oral Presentation

Topic: Neuroergonomics

Citation: Abubshait A, Weis P and Wiese E (2019). Effects of embodiment on social attention mechanisms in human-robot interaction. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00080

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 03 Apr 2018; Published Online: 27 Sep 2019.

* Correspondence: Mr. Abdulaziz Abubshait, George Mason University, Fairfax, United States, abubsh@gmail.com