<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="discussion" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Virtual Real.</journal-id>
<journal-title>Frontiers in Virtual Reality</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Virtual Real.</abbrev-journal-title>
<issn pub-type="epub">2673-4192</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">798899</article-id>
<article-id pub-id-type="doi">10.3389/frvir.2021.798899</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Virtual Reality</subject>
<subj-group>
<subject>Opinion</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Look Into My &#x201c;Virtual&#x201d; Eyes: What Dynamic Virtual Agents Add to the Realistic Study of Joint Attention</article-title>
<alt-title alt-title-type="left-running-head">Gregory et&#x20;al.</alt-title>
<alt-title alt-title-type="right-running-head">Virtual Agents and Joint Attention</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Gregory</surname>
<given-names>Samantha E. A.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="fn" rid="fn1">
<sup>&#x2020;</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1373340/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kelly</surname>
<given-names>Cl&#xed;ona L.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="fn" rid="fn1">
<sup>&#x2020;</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1550475/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kessler</surname>
<given-names>Klaus</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/21808/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>Department of Psychology, University of Salford, <addr-line>Salford</addr-line>, <country>United&#x20;Kingdom</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>Aston Laboratory for Immersive Virtual Environments, Aston Institute of Health and Neurodevelopment, Aston University, <addr-line>Birmingham</addr-line>, <country>United&#x20;Kingdom</country>
</aff>
<aff id="aff3">
<label>
<sup>3</sup>
</label>School of Psychology, University College Dublin, <addr-line>Dublin</addr-line>, <country>Ireland</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/588947/overview">Evelien Heyselaar</ext-link>, Radboud University, Netherlands</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/10124/overview">Hiroshi Ashida</ext-link>, Kyoto University, Japan</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Samantha E. A. Gregory, <email>s.e.a.gregory@salford.ac.uk</email>
</corresp>
<fn fn-type="equal" id="fn1">
<label>
<sup>&#x2020;</sup>
</label>
<p>These authors have contributed equally to this work and share first authorship</p>
</fn>
<fn fn-type="other">
<p>This article was submitted to Virtual Reality and Human Behaviour, a section of the journal Frontiers in Virtual Reality</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>12</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>2</volume>
<elocation-id>798899</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>10</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>11</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Gregory, Kelly and Kessler.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Gregory, Kelly and Kessler</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<kwd-group>
<kwd>joint attention</kwd>
<kwd>virtual agents</kwd>
<kwd>dynamic gaze</kwd>
<kwd>avatars</kwd>
<kwd>social interaction</kwd>
<kwd>ecological validity</kwd>
<kwd>virtual reality</kwd>
<kwd>social cognition</kwd>
</kwd-group>
<contract-sponsor id="cn001">Leverhulme Trust<named-content content-type="fundref-id">10.13039/501100000275</named-content>
</contract-sponsor>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Joint attention, defined as the coordination of orienting between two or more people toward an object, person or event (<xref ref-type="bibr" rid="B1">Billeci et&#x20;al., 2017</xref>; <xref ref-type="bibr" rid="B35">Mundy and Newell, 2007</xref>; <xref ref-type="bibr" rid="B45">Scaife and Bruner, 1975</xref>), is one of the essential mechanisms of social interaction (<xref ref-type="bibr" rid="B9">Chevalier et&#x20;al., 2020</xref>). Joint attention can be signalled through both verbal and visual cues, with gaze direction providing an important visual signal of attention and thus of joint attention (<xref ref-type="bibr" rid="B25">Kleinke, 1986</xref>; <xref ref-type="bibr" rid="B13">Emery, 2000</xref>; <xref ref-type="bibr" rid="B30">Land and Tatler, 2009</xref>). It is therefore crucial to understand how eye gaze is used as part of joint attention in social scenarios. Importantly, the emergence of virtual agents and social robots gives us the opportunity to better understand these processes as they may occur in human-to-human interaction, as well as to enable realistic human-to-agent/robot interaction.</p>
<p>Traditionally, the influence of gaze on attentional orienting has been studied in cueing paradigms based on Posner&#x2019;s original spatial task (e.g., <xref ref-type="bibr" rid="B40">Posner, 1980</xref>). Studies using simple, static face stimuli, e.g., photographs of real faces or drawings of schematic faces (see <xref ref-type="boxed-text" rid="Box1">Box 1</xref>, panel A), show that gaze provides a very strong attentional cue (<xref ref-type="bibr" rid="B18">Frischen et&#x20;al., 2007</xref>). Importantly, though the basic cueing effect has also been replicated with non-social cues such as arrows or direction words (<xref ref-type="bibr" rid="B21">Hommel et&#x20;al., 2001</xref>; <xref ref-type="bibr" rid="B43">Ristic et&#x20;al., 2002</xref>; <xref ref-type="bibr" rid="B48">Tipples, 2002</xref>, <xref ref-type="bibr" rid="B49">2008</xref>), it is argued that the strength and immediacy of gaze cueing demonstrate something special about how we respond to social information, and in particular to eye gaze (<xref ref-type="bibr" rid="B18">Frischen et&#x20;al., 2007</xref>; <xref ref-type="bibr" rid="B23">Kampis and Southgate, 2020</xref>; <xref ref-type="bibr" rid="B47">Stephenson et&#x20;al., 2021</xref>).<boxed-text id="Box1">
<label>BOX 1</label>
<p>Examples of the range of stimuli used in the research presented in this review, rated on realism, flexibility (of movement) and experimental control. The left side begins with the highly controlled but unrealistic and inflexible approach of static faces <bold>(A)</bold>. Next we show the poorly controlled, but flexible and potentially realistic, approach of using real humans <bold>(B)</bold>, though realism is arguably weakened by the contrived nature of studies and environments, exacerbated here by the addition of experimentally necessary but distinctive brain imaging technology. Finally, on the right we arguably achieve greater realism and control, as well as reasonable movement flexibility, by using dynamic virtual agents <bold>(C)</bold> whose eyes and heads can move. Presented examples have either been reproduced with permission or created for the box to be representative of the stimuli used. 1. Schematic faces, e.g., <xref ref-type="bibr" rid="B17">Friesen and Kingstone (1998)</xref>; 2. &#x201c;Dynamic&#x201d; photograph gaze shift stimuli where a direct face is presented prior to an averted gaze face&#x2a;, e.g., <xref ref-type="bibr" rid="B8">Chen et&#x20;al. (2021)</xref>; 3. Averted gaze photograph stimuli&#x2a;, e.g., <xref ref-type="bibr" rid="B12">Driver et&#x20;al. (1999)</xref>; 4. Human-human interaction, reproduced with permission from <xref ref-type="bibr" rid="B11">Dravida et&#x20;al. (2020)</xref>; 5. Human-human shared attention paradigm, reproduced with permission from <xref ref-type="bibr" rid="B28">Lachat et&#x20;al. (2012)</xref>; 6. Human as gaze cue, example used with permission from <xref ref-type="bibr" rid="B10">Cole et&#x20;al. (2015)</xref>; 7. Dynamic virtual agent head, reproduced with permission from <xref ref-type="bibr" rid="B7">Caruana et&#x20;al. (2020)</xref>; 8. Virtual agent as gaze cue (lab stimuli), as used in <xref ref-type="bibr" rid="B20">Gregory (2021)</xref>; 9. Virtual agent as interaction partner with realistic gaze movement (lab stimuli), Kelly et&#x20;al. (in prep.; see video <ext-link ext-link-type="uri" xlink:href="https://youtu.be/sgrxOpYP91E">youtu.be/sgrxOpYP91E</ext-link>).</p>
<p>&#x2a;Face images from The Radboud Faces database, <xref ref-type="bibr" rid="B31">Langner et&#x20;al. (2010)</xref>.</p>
<p>
<inline-graphic xlink:href="frvir-02-798899-fx1.tif"/>
</p>
</boxed-text>
</p>
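<p>For illustration, the gaze-cueing effect in such paradigms is typically quantified as the difference in mean response time between invalidly and validly cued trials, with a positive difference indicating faster orienting toward the gazed-at location. The following minimal Python sketch uses hypothetical trial data and field names; it is not code from any cited study.</p>
<preformat>
# Minimal sketch: computing a gaze-cueing effect from trial data.
# Each (hypothetical) trial records whether the target appeared at the
# gazed-at location ("valid") or opposite it ("invalid"), plus the
# response time in milliseconds.
trials = [
    {"validity": "valid", "rt_ms": 312.0},
    {"validity": "invalid", "rt_ms": 348.0},
    {"validity": "valid", "rt_ms": 305.0},
    {"validity": "invalid", "rt_ms": 339.0},
]

def mean_rt(data, validity):
    rts = [t["rt_ms"] for t in data if t["validity"] == validity]
    return sum(rts) / len(rts)

# A positive difference indicates attention was shifted by the gaze cue.
cueing_effect = mean_rt(trials, "invalid") - mean_rt(trials, "valid")
print(f"Gaze-cueing effect: {cueing_effect:.1f} ms")
</preformat>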
<p>While research conducted into gaze cueing phenomena to date has been informative, the use of simplistic and most often static face stimuli such as photographs of real faces, drawings of schematic faces or even just eyes (see <xref ref-type="boxed-text" rid="Box1">Box 1</xref>, panel A), as well as the frequent use of unrealistic tasks and environments (e.g., a face image &#x201c;floating&#x201d; in a 2D spatial environment), has been highlighted as problematic if we truly wish to understand social processes (<xref ref-type="bibr" rid="B19">Gobel et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B41">Risko et&#x20;al., 2012</xref>, <xref ref-type="bibr" rid="B42">2016</xref>; <xref ref-type="bibr" rid="B50">Zaki and Ochsner, 2009</xref>). For example, while research in traditional 2D settings shows that participants tend to focus on the eye region of faces (<xref ref-type="bibr" rid="B2">Birmingham et&#x20;al., 2008</xref>), research conducted in real life shows participants avoiding direct eye contact (<xref ref-type="bibr" rid="B14">Foulsham et&#x20;al., 2011</xref>; <xref ref-type="bibr" rid="B29">Laidlaw et&#x20;al., 2011</xref>; <xref ref-type="bibr" rid="B32">Mansour and Kuhn, 2019</xref>). Since the engagement of eye contact and the following of eye gaze direction are crucial aspects of joint attention, it is extremely important to look for alternative approaches that may better reflect real-life social interaction.</p>
</sec>
<sec id="s2">
<title>Investigating Joint Attention Using Real Humans</title>
<p>One obvious option to better understand how joint attention might manifest in real-world social interaction is to observe interaction between real humans. However, while observational studies can be informative, it is also important to employ experimental studies in order to build theories and test hypotheses about the nature of human interaction. Indeed, a number of empirical studies have been conducted using real human interaction, which in principle replicate known gaze cueing and joint attention effects (<xref ref-type="bibr" rid="B10">Cole et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B11">Dravida et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B28">Lachat et&#x20;al., 2012</xref>; see <xref ref-type="boxed-text" rid="Box1">Box 1</xref>, panel B). Using real humans as interaction partners in such empirical studies, however, has a number of important limitations that affect the kinds of studies and the level of nuance possible.</p>
<p>First, such studies are resource heavy, requiring a confederate&#x2019;s concentration and time, thus limiting the number of repetitions possible (trials and participants). Second, study design is limited; for example, it is impossible to subtly change the timing of gaze shifts in the millisecond range, to change the identity of the gaze cue during the task, or to change other aspects of stimulus presentation. Third, experimental control is limited; real people will not perform the same action in the same way multiple times during an experimental session and are also unlikely to behave in exactly the same way in each experimental session. Further, they may make many involuntary nuanced facial expressions such as smirking or raising their eyebrows, as well as uncontrollable microexpressions (e.g., <xref ref-type="bibr" rid="B38">Porter and ten Brinke, 2008</xref>; <xref ref-type="bibr" rid="B39">Porter et&#x20;al., 2012</xref>), potentially affecting the validity of the study (e.g., <xref ref-type="bibr" rid="B27">Kuhlen and Brennan, 2013</xref>). Finally, studies with human confederates often place participants in a very unnatural &#x201c;social&#x201d; experience, i.e.,&#x20;engaging in a highly artificial task in a lab environment while being stared at by a stranger who is not communicating in a particularly natural way (see <xref ref-type="boxed-text" rid="Box1">Box 1</xref>, panel&#x20;B).</p>
<p>Therefore, while using real humans provides real-life dynamic interaction, this comes at the cost of experimental control and design flexibility. While others have suggested the use of social robots (e.g., <xref ref-type="bibr" rid="B9">Chevalier et&#x20;al., 2020</xref>), here we propose that virtual agents are the optimum alternative for the experimental study of joint attention.</p>
</sec>
<sec id="s3">
<title>Investigating Joint Attention Using Virtual Agents</title>
<p>Virtual Agents (VAs: also referred to as virtual humans or characters) are defined as computer-generated virtual reality characters with human-like appearances, in contrast to an avatar, which is a humanoid representation of a user in a virtual world (<xref ref-type="bibr" rid="B37">Pan and Hamilton, 2018</xref>). These VAs offer the opportunity to conduct social interaction studies with higher realism while retaining experimental control. Importantly, similar social behaviours have been found during interactions with VAs as are observed during real human interaction (for reviews and further discussion see <xref ref-type="bibr" rid="B3">Bombari et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B26">Kothgassner and Felnhofer, 2020</xref>; <xref ref-type="bibr" rid="B37">Pan and Hamilton, 2018</xref>).</p>
<p>The use of VAs is highly cost effective, with ready-made or easy-to-modify assets available on several platforms (examples: Mixamo: <ext-link ext-link-type="uri" xlink:href="http://www.mixamo.com">www.mixamo.com</ext-link>; SketchFab: <ext-link ext-link-type="uri" xlink:href="http://sketchfab.com">sketchfab.com</ext-link>; MakeHuman: <ext-link ext-link-type="uri" xlink:href="http://www.makehumancommunity.org">www.makehumancommunity.org</ext-link>). These VA assets can then be manipulated for the purposes of a study using free software such as Blender for animation and Unity3D for experimental development and control. The VAs can be presented through fully immersive VR headsets, through augmented reality, and through basic computer setups, each of which has pros and cons related to expense, portability and immersion (<xref ref-type="bibr" rid="B37">Pan and Hamilton, 2018</xref>). Importantly, even using VA-based stimuli (e.g., dynamic video recordings) in traditional screen-based studies offers significantly more social nuance and realism compared to using static images of disembodied heads (e.g., <xref ref-type="bibr" rid="B20">Gregory, 2021</xref>, see <xref ref-type="boxed-text" rid="Box1">Box 1</xref>, panel C). Indeed, gaze is associated with an intent to act (<xref ref-type="bibr" rid="B30">Land and Tatler, 2009</xref>), and it is difficult to imagine a static head performing a goal-directed action.</p>
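<p>As a concrete example of the animation step mentioned above, gaze shifts can be keyframed on a rigged VA directly through Blender&#x2019;s Python API. The following is a minimal sketch only, assuming a hypothetical armature object named &#x201c;Agent&#x201d; with pose bones named &#x201c;eye.L&#x201d; and &#x201c;eye.R&#x201d;; bone names and rotation axes vary between assets.</p>
<preformat>
# Minimal Blender (bpy) sketch: keyframe a lateral gaze shift on a
# rigged virtual agent. "Agent", "eye.L" and "eye.R" are assumed names;
# real assets (e.g., from Mixamo or MakeHuman) will differ.
import math
import bpy

arm = bpy.data.objects["Agent"]
for bone_name in ("eye.L", "eye.R"):
    bone = arm.pose.bones[bone_name]
    bone.rotation_mode = 'XYZ'
    # Frame 1: eyes directed straight ahead (at the participant).
    bone.rotation_euler = (0.0, 0.0, 0.0)
    bone.keyframe_insert(data_path="rotation_euler", frame=1)
    # Frame 10: eyes rotated ~15 degrees laterally, approximating a
    # saccade-like gaze cue (which axis is lateral depends on the rig).
    bone.rotation_euler = (0.0, 0.0, math.radians(15.0))
    bone.keyframe_insert(data_path="rotation_euler", frame=10)
</preformat>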
<p>When adopting VAs as social partners, researchers may weigh factors such as the realism of the VAs, including the issue of the &#x201c;uncanny valley&#x201d;, and whether the participant perceives the VA as a social partner at all. It may be argued that people will not interact with VAs in the same way they will with real humans. However, research indicates that this concern is unfounded (<xref ref-type="bibr" rid="B3">Bombari et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B26">Kothgassner and Felnhofer, 2020</xref>; <xref ref-type="bibr" rid="B37">Pan and Hamilton, 2018</xref>); indeed, research in moral psychology demonstrates strongly realistic responses in VR (<xref ref-type="bibr" rid="B15">Francis et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B16">Francis et&#x20;al., 2017</xref>; <xref ref-type="bibr" rid="B36">Niforatos et&#x20;al., 2020</xref>).</p>
<p>The &#x201c;uncanny valley&#x201d; refers to our loss of affinity for computer-generated agents that approach, but fail to attain, human-like realism (<xref ref-type="bibr" rid="B34">Mori et&#x20;al., 2012</xref>). Such failures can serve as a reminder that the VA is not real, limiting the naturalness of the interaction. It is important to note, however, that the same arguments of unnaturalness can also be applied to some studies using real humans, as discussed above and presented in <xref ref-type="boxed-text" rid="Box1">Box 1</xref>, panel B. Therefore, even rudimentary VAs can be beneficial when investigating social interaction. Indeed, research suggests that even presenting the most basic VA can be successful if the eyes are communicative, with findings showing that responses to gaze in human-to-agent interactions are comparable to those in human-human interaction (<xref ref-type="bibr" rid="B44">Ruhland et&#x20;al., 2015</xref>).</p>
<p>Joint attention as initiated through eye gaze is therefore an area ripe for the use of VAs, because it is generally investigated in isolation from verbal cues, as well as from body movements, which can affect responses independently (e.g., <xref ref-type="bibr" rid="B33">Mazzarella et&#x20;al., 2012</xref>), mitigating concerns regarding the complexity of producing realistic speech and action (<xref ref-type="bibr" rid="B37">Pan and Hamilton, 2018</xref>). Importantly, for use in joint attention research, VAs can mimic the dynamic behaviour of human gaze with significant precision, as full control of VA eye movements is available (e.g., <xref ref-type="bibr" rid="B44">Ruhland et&#x20;al., 2015</xref>). Similar to whole-body motion capture for animations, it is possible to apply real-time and/or recorded human eye movements onto VAs, e.g., by using HMDs with eye-tracking such as the HTC Vive Pro Eye (see video: <ext-link ext-link-type="uri" xlink:href="https://youtu.be/sgrxOpYP91E">https://youtu.be/sgrxOpYP91E</ext-link>), creating the impression of a highly naturalistic interaction, but one with the key aspects of experimental control. This goes significantly beyond traditional gaze cueing studies, where gaze is presented statically in the desired directions (see <xref ref-type="boxed-text" rid="Box1">Box 1</xref>, panel A). Importantly, VAs allow investigation of social interaction with real human gaze sequences, as well as allowing a closed-loop system that determines the VA&#x2019;s behaviour based on the user&#x2019;s gaze (<xref ref-type="bibr" rid="B24">Kelly et&#x20;al., 2020</xref>). This in turn allows fundamental investigations into how a person responds when their gaze is followed.</p>
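<p>To make the closed-loop idea concrete, the sketch below shows the general control flow, independent of any particular engine or eye-tracking SDK. The functions read_user_gaze() and set_agent_gaze() are hypothetical stubs standing in for, respectively, the HMD&#x2019;s eye-tracking API and the rendering engine; the dwell-time threshold is an arbitrary illustrative value.</p>
<preformat>
# Hypothetical sketch of a gaze-contingent (closed-loop) interaction:
# once the participant fixates the target object long enough, the
# virtual agent follows their gaze.
import time

FIXATION_THRESHOLD_S = 0.3  # dwell time counted as a fixation (illustrative)
TARGET_OBJECT = "mug"       # object the task asks the participant to look at

def read_user_gaze():
    """Stub: return the label of the object the user's gaze ray hits."""

def set_agent_gaze(object_label):
    """Stub: orient the agent's eyes/head toward the named object."""

fixation_start = None
while True:
    if read_user_gaze() == TARGET_OBJECT:
        fixation_start = fixation_start or time.monotonic()
        if time.monotonic() - fixation_start >= FIXATION_THRESHOLD_S:
            set_agent_gaze(TARGET_OBJECT)  # agent joins the user's attention
    else:
        fixation_start = None              # fixation broken; reset the timer
    time.sleep(1 / 90)                     # poll at ~HMD refresh rate
</preformat>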
<p>Going beyond eye gaze, VAs also enable the consistent use of age-matched stimuli, especially when investigating child development or aging, where it may be difficult to use age-matched confederates. Age matching, as well as controlled age mismatching, can be vital in understanding changes in joint attention and other aspects of social cognition across the lifespan (e.g., <xref ref-type="bibr" rid="B46">Slessor et&#x20;al., 2010</xref>), as well as when investigating differences in children with learning difficulties related to ADHD and autism diagnoses (e.g., <xref ref-type="bibr" rid="B4">Bradley and Newbutt, 2018</xref>; <xref ref-type="bibr" rid="B22">Jyoti et&#x20;al., 2019</xref>). VAs also facilitate investigations of cueing with a controlled variety of different &#x201c;people&#x201d;, as well as more general effects of multiagent joint attention (<xref ref-type="bibr" rid="B5">Capozzi et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B6">Capozzi et&#x20;al., 2018</xref>). Indeed, the presentation of VAs allows full control of the distance between each partner and of their gaze behaviour while allowing for a natural configuration of people in a room: a difficult endeavour with multiple human confederates.</p>
<p>Virtual scenarios also enable the manipulation of the environment in which the interaction occurs, and of the stimuli presented as part of the task. This allows investigation of how interactions may differ in indoor versus outdoor environments, as well as comparison of formal learning environments such as a classroom with less formal environments. Consequently, future investigations of joint attention in more dynamic, interactive VR scenarios could focus on how the participant explores the virtual world(s) with their virtual social partner(s).</p>
</sec>
<sec sec-type="conclusion" id="s4">
<title>Conclusion</title>
<p>The use of virtual reality and virtual agents (VAs) in studying human interaction gives researchers a high level of experimental control while allowing participants to engage naturally in a realistic experimental environment. This enables researchers to study the nuances of joint attention, and social interaction more generally, without the common pitfalls of both simplistic static face stimuli and studies using real humans. In contrast to human-to-human interaction, the functionality and control of VAs mitigate any lack of realism that may be experienced. Arguably, it is more natural to interact with a VA presented in a realistic VR environment than with a real human in unusual experimental settings/headgear (see <xref ref-type="boxed-text" rid="Box1">Box 1</xref>). Further, though it has been proposed that social robots are an ideal alternative to human-to-human interaction, particularly in terms of control and realism of eye gaze interaction (<xref ref-type="bibr" rid="B9">Chevalier et&#x20;al., 2020</xref>), VAs, particularly when presented in fully immersive VR environments, are more versatile, more cost effective and potentially more realistic than social robots. VAs offer infinite options for quick adjustments in terms of appearance (e.g., age), environment and kinematics, as well as significantly higher control over the experimental setting. This is further corroborated by the accelerated emergence of the so-called &#x201c;Metaverse&#x201d;, and we therefore argue that future breakthroughs in understanding human social interaction will likely come from investigations using VAs believed to have genuine agency in a task. Having VAs that can react based on a participant&#x2019;s behaviour, and in particular their gaze behaviour, opens the door to countless experiments that can provide stronger insights into how joint attention affects our cognition.</p>
</sec>
</body>
<back>
<sec id="s5">
<title>Author Contributions</title>
<p>SG and CK contributed equally to this work and share first authorship. KK is the senior author. All authors worked together on conceiving and drafting the manuscript, working collaboratively on a shared Google document. All authors read and approved the submitted version.</p>
</sec>
<sec id="s6">
<title>Funding</title>
<p>This work was supported by a Leverhulme Trust Early Career Fellowship (ECF-2018-130) awarded to S.E.A. Gregory.</p>
</sec>
<sec sec-type="COI-statement" id="s7">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s8">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Billeci</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Narzisi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Tonacci</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sbriscia-Fioretti</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Serasini</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Fulceri</surname>
<given-names>F.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <article-title>An Integrated EEG and Eye-Tracking Approach for the Study of Responding and Initiating Joint Attention in Autism Spectrum Disorders</article-title>. <source>Sci. Rep.</source> <volume>7</volume> (<issue>1</issue>), <fpage>1</fpage>&#x2013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-017-13053-4</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Birmingham</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Bischof</surname>
<given-names>W. F.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Gaze Selection in Complex Social Scenes</article-title>. <source>Vis. Cogn.</source> <volume>16</volume> (<issue>2</issue>), <fpage>341</fpage>&#x2013;<lpage>355</lpage>. <pub-id pub-id-type="doi">10.1080/13506280701434532</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bombari</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Schmid Mast</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Canadas</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Bachmann</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Studying Social Interactions through Immersive Virtual Environment Technology: Virtues, Pitfalls, and Future Challenges</article-title>. <source>Front. Psychol.</source> <volume>6</volume> (<issue>June</issue>), <fpage>1</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2015.00869</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bradley</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Newbutt</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Autism and Virtual Reality Head-Mounted Displays: A State of the Art Systematic Review</article-title>. <source>J.&#x20;Enabling Technol.</source> <volume>12</volume> (<issue>3</issue>), <fpage>101</fpage>&#x2013;<lpage>113</lpage>. <pub-id pub-id-type="doi">10.1108/JET-01-2018-0004</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Capozzi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Bayliss</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Elena</surname>
<given-names>M. R.</given-names>
</name>
<name>
<surname>Becchio</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>One Is Not Enough: Group Size Modulates Social Gaze-Induced Object Desirability Effects</article-title>. <source>Psychon. Bull. Rev.</source> <volume>22</volume> (<issue>3</issue>), <fpage>850</fpage>&#x2013;<lpage>855</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-014-0717-z</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Capozzi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Bayliss</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Ristic</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Gaze Following in Multiagent Contexts: Evidence for a Quorum-like Principle</article-title>. <source>Psychon. Bull. Rev.</source> <volume>25</volume> (<issue>6</issue>), <fpage>2260</fpage>&#x2013;<lpage>2266</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-018-1464-3</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Caruana</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Alhasan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Wagner</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kaplan</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Woolgar</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>McArthur</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>The Effect of Non-communicative Eye Movements on Joint Attention</article-title>. <source>Q. J.&#x20;Exp. Psychol.</source> <volume>73</volume> (<issue>12</issue>), <fpage>2389</fpage>&#x2013;<lpage>2402</lpage>. <pub-id pub-id-type="doi">10.1177/1747021820945604</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>McCrackin</surname>
<given-names>S. D.</given-names>
</name>
<name>
<surname>Morgan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Itier</surname>
<given-names>R. J.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>The Gaze Cueing Effect and its Enhancement by Facial Expressions Are Impacted by Task Demands: Direct Comparison of Target Localization and Discrimination Tasks</article-title>. <source>Front. Psychol.</source> <volume>12</volume> (<issue>March</issue>), <fpage>696</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2021.618606</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chevalier</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kompatsiari</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Ciardo</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Wykowska</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Examining Joint Attention with the Use of Humanoid Robots-A New Approach to Study Fundamental Mechanisms of Social Cognition</article-title>. <source>Psychon. Bull. Rev.</source> <volume>27</volume> (<issue>2</issue>), <fpage>217</fpage>&#x2013;<lpage>236</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-019-01689-4</pub-id> </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cole</surname>
<given-names>G. G.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>D. T.</given-names>
</name>
<name>
<surname>Atkinson</surname>
<given-names>M. A.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Mental State Attribution and the Gaze Cueing Effect</article-title>. <source>Atten Percept Psychophys</source> <volume>77</volume> (<issue>4</issue>), <fpage>1105</fpage>&#x2013;<lpage>1115</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-014-0780-6</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dravida</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Noah</surname>
<given-names>J.&#x20;A.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Hirsch</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Joint Attention during Live Person-To-Person Contact Activates rTPJ, Including a Sub-component Associated with Spontaneous Eye-To-Eye Contact</article-title>. <source>Front. Hum. Neurosci.</source> <volume>14</volume>, <fpage>201</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2020.00201</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Ricciardelli</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kidd</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Maxwell</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Baron-Cohen</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>1999</year>). <article-title>Gaze Perception Triggers Reflexive Visuospatial Orienting</article-title>. <source>Vis. Cogn.</source> <volume>6</volume> (<issue>5</issue>), <fpage>509</fpage>&#x2013;<lpage>540</lpage>. <pub-id pub-id-type="doi">10.1080/135062899394920</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Emery</surname>
<given-names>N. J.</given-names>
</name>
</person-group> (<year>2000</year>). <article-title>The Eyes Have it: The Neuroethology, Function and Evolution of Social Gaze</article-title>. <source>Neurosci. Biobehavioral Rev.</source> <volume>24</volume> (<issue>6</issue>), <fpage>581</fpage>&#x2013;<lpage>604</lpage>. <pub-id pub-id-type="doi">10.1016/S0149-7634(00)00025-7</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Foulsham</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Walker</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>The where, what and when of Gaze Allocation in the Lab and the Natural Environment</article-title>. <source>Vis. Res.</source> <volume>51</volume> (<issue>17</issue>), <fpage>1920</fpage>&#x2013;<lpage>1931</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2011.07.002</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Francis</surname>
<given-names>K. B.</given-names>
</name>
<name>
<surname>Howard</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Howard</surname>
<given-names>I. S.</given-names>
</name>
<name>
<surname>Gummerum</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ganis</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>G.</given-names>
</name>
<etal/>
</person-group> (<year>2016</year>). <article-title>Virtual Morality: Transitioning from Moral Judgment to Moral Action?</article-title> <source>PLoS ONE</source> <volume>11</volume> (<issue>10</issue>), <fpage>e0164374</fpage>&#x2013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0164374</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Francis</surname>
<given-names>K. B.</given-names>
</name>
<name>
<surname>Terbeck</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Briazu</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Haines</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gummerum</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ganis</surname>
<given-names>G.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <article-title>Simulating Moral Actions: An Investigation of Personal Force in Virtual Moral Dilemmas</article-title>. <source>Sci. Rep.</source> <volume>7</volume> (<issue>1</issue>), <fpage>1</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-017-13909-9</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friesen</surname>
<given-names>C. K.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>1998</year>). <article-title>The Eyes Have it! Reflexive Orienting Is Triggered by Nonpredictive Gaze</article-title>. <source>Psychon. Bull. Rev.</source> <volume>5</volume> (<issue>3</issue>), <fpage>490</fpage>&#x2013;<lpage>495</lpage>. <pub-id pub-id-type="doi">10.3758/BF03208827</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frischen</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bayliss</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Tipper</surname>
<given-names>S. P.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Gaze Cueing of Attention: Visual Attention, Social Cognition, and Individual Differences</article-title>. <source>Psychol. Bull.</source> <volume>133</volume> (<issue>4</issue>), <fpage>694</fpage>&#x2013;<lpage>724</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.133.4.694</pub-id> </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gobel</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>H. S.</given-names>
</name>
<name>
<surname>Richardson</surname>
<given-names>D. C.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>The Dual Function of Social Gaze</article-title>. <source>Cognition</source> <volume>136</volume>, <fpage>359</fpage>&#x2013;<lpage>364</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2014.11.040</pub-id> </citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gregory</surname>
<given-names>S. E. A.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Investigating Facilitatory versus Inhibitory Effects of Dynamic Social and Non-social Cues on Attention in a Realistic Space</article-title>. <source>Psychol. Res.</source> <pub-id pub-id-type="doi">10.1007/s00426-021-01574-7</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hommel</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Pratt</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Colzato</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Godijn</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2001</year>). <article-title>Symbolic Control of Visual Attention</article-title>. <source>Psychol. Sci.</source> <volume>12</volume> (<issue>5</issue>), <fpage>360</fpage>&#x2013;<lpage>365</lpage>. <pub-id pub-id-type="doi">10.1111/1467-9280.00367</pub-id> </citation>
</ref>
<ref id="B22">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Jyoti</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Gupta</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lahiri</surname>
<given-names>U.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>Virtual Reality Based Avatar-Mediated Joint Attention Task for Children with Autism: Implication on Performance and Physiology</article-title>,&#x201d; in <conf-name>2019 10th International Conference on&#x20;Computing, Communication and Networking Technologies, ICCCNT</conf-name>, <conf-loc>IIT KANPUR</conf-loc>, <conf-date>July 2019</conf-date>, <publisher-name>IEEE</publisher-name>, <fpage>1</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1109/ICCCNT45670.2019.8944467</pub-id> </citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kampis</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Southgate</surname>
<given-names>V.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Altercentric Cognition: How Others Influence Our Cognitive Processing</article-title>. <source>Trends Cogn. Sci.</source> <volume>24</volume>, <fpage>945</fpage>&#x2013;<lpage>959</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2020.09.003</pub-id> </citation>
</ref>
<ref id="B24">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kelly</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Bernardet</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Kessler</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>A Neuro-VR Toolbox for Assessment and Intervention in Autism: Brain Responses to Non-verbal, Gaze and Proxemics Behaviour in Virtual Humans</article-title>,&#x201d; in <source>2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)</source>, <conf-date>March 2020</conf-date>, <conf-loc>United&#x20;Kingdom</conf-loc>, (<publisher-name>IEEE</publisher-name>), <fpage>565</fpage>&#x2013;<lpage>566</lpage>. <pub-id pub-id-type="doi">10.1109/vrw50115.2020.00134</pub-id> </citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kleinke</surname>
<given-names>C. L.</given-names>
</name>
</person-group> (<year>1986</year>). <article-title>Gaze and Eye Contact: A Research Review</article-title>. <source>Psychol. Bull.</source> <volume>100</volume> (<issue>1</issue>), <fpage>78</fpage>&#x2013;<lpage>100</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.100.1.78</pub-id> </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kothgassner</surname>
<given-names>O. D.</given-names>
</name>
<name>
<surname>Felnhofer</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Does Virtual Reality Help to Cut the Gordian Knot between Ecological Validity and Experimental Control?</article-title> <source>Ann. Int. Commun. Assoc.</source> <volume>44</volume>, <fpage>210</fpage>&#x2013;<lpage>218</lpage>. <comment>Routledge</comment>. <pub-id pub-id-type="doi">10.1080/23808985.2020.1792790</pub-id> </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuhlen</surname>
<given-names>A. K.</given-names>
</name>
<name>
<surname>Brennan</surname>
<given-names>S. E.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Language in Dialogue: When Confederates Might Be Hazardous to Your Data</article-title>. <source>Psychon. Bull. Rev.</source> <volume>20</volume> (<issue>1</issue>), <fpage>54</fpage>&#x2013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-012-0341-8</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lachat</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Conty</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Hugueville</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>George</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Gaze Cueing Effect in a Face-To-Face Situation</article-title>. <source>J.&#x20;Nonverbal Behav.</source> <volume>36</volume> (<issue>3</issue>), <fpage>177</fpage>&#x2013;<lpage>190</lpage>. <pub-id pub-id-type="doi">10.1007/s10919-012-0133-x</pub-id> </citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Laidlaw</surname>
<given-names>K. E. W.</given-names>
</name>
<name>
<surname>Foulsham</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kuhn</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>Potential Social Interactions Are Important to Social Attention</article-title>. <source>Proc. Natl. Acad. Sci.</source> <volume>108</volume> (<issue>14</issue>), <fpage>5548</fpage>&#x2013;<lpage>5553</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1017022108</pub-id> </citation>
</ref>
<ref id="B30">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Land</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tatler</surname>
<given-names>B. W.</given-names>
</name>
</person-group> (<year>2009</year>). <source>Looking and Acting: Vision and Eye Movements in Natural Behaviour</source>. <publisher-name>Oxford University Press</publisher-name>. </citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Langner</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Dotsch</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bijlstra</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Wigboldus</surname>
<given-names>D. H. J.</given-names>
</name>
<name>
<surname>Hawk</surname>
<given-names>S. T.</given-names>
</name>
<name>
<surname>van Knippenberg</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Presentation and Validation of the Radboud Faces Database</article-title>. <source>Cogn. Emot.</source> <volume>24</volume> (<issue>8</issue>), <fpage>1377</fpage>&#x2013;<lpage>1388</lpage>. <pub-id pub-id-type="doi">10.1080/02699930903485076</pub-id> </citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mansour</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Kuhn</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Studying "natural" Eye Movements in an "unnatural" Social Environment: The Influence of Social Activity, Framing, and Sub-clinical Traits on Gaze Aversion</article-title>. <source>Q. J.&#x20;Exp. Psychol.</source> <volume>72</volume> (<issue>8</issue>), <fpage>1913</fpage>&#x2013;<lpage>1925</lpage>. <pub-id pub-id-type="doi">10.1177/1747021818819094</pub-id> </citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mazzarella</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Hamilton</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Trojano</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Mastromauro</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Conson</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Observation of Another&#x27;s Action but Not Eye Gaze Triggers Allocentric Visual Perspective</article-title>. <source>Q. J.&#x20;Exp. Psychol.</source> <volume>65</volume> (<issue>12</issue>), <fpage>2447</fpage>&#x2013;<lpage>2460</lpage>. <pub-id pub-id-type="doi">10.1080/17470218.2012.697905</pub-id> </citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>MacDorman</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kageki</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>The Uncanny Valley [From the Field]</article-title>. <source>IEEE Robot. Automat. Mag.</source> <volume>19</volume> (<issue>2</issue>), <fpage>98</fpage>&#x2013;<lpage>100</lpage>. <pub-id pub-id-type="doi">10.1109/MRA.2012.2192811</pub-id> </citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mundy</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Newell</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Attention, Joint Attention, and Social Cognition</article-title>. <source>Curr. Dir. Psychol. Sci.</source> <volume>16</volume> (<issue>5</issue>), <fpage>269</fpage>&#x2013;<lpage>274</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-8721.2007.00518.x</pub-id> </citation>
</ref>
<ref id="B36">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Niforatos</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Palma</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gluszny</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Vourvopoulos</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Liarokapis</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Would You Do it? Enacting Moral Dilemmas in Virtual Reality for Understanding Ethical Decision-Making</article-title>. <source>Conf. Hum. Factors Comput. Syst. - Proc.</source>, <fpage>1</fpage>&#x2013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1145/3313831.3376788</pub-id> </citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pan</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Hamilton</surname>
<given-names>A. F. C.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Why and How to Use Virtual Reality to Study Human Social Interaction: The Challenges of Exploring a New Research Landscape</article-title>. <source>Br. J.&#x20;Psychol.</source> <volume>109</volume> (<issue>3</issue>), <fpage>395</fpage>&#x2013;<lpage>417</lpage>. <pub-id pub-id-type="doi">10.1111/bjop.12290</pub-id> </citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Porter</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>ten Brinke</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Reading between the Lies</article-title>. <source>Psychol. Sci.</source> <volume>19</volume> (<issue>5</issue>), <fpage>508</fpage>&#x2013;<lpage>514</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9280.2008.02116.x</pub-id> </citation>
</ref>
<ref id="B39">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Porter</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>ten Brinke</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Secrets and Lies: Involuntary Leakage in Deceptive Facial Expressions as a Function of Emotional Intensity</article-title>. <source>J.&#x20;Nonverbal Behav.</source> <volume>36</volume> (<issue>1</issue>), <fpage>23</fpage>&#x2013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1007/s10919-011-0120-7</pub-id> </citation>
</ref>
<ref id="B40">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Posner</surname>
<given-names>M. I.</given-names>
</name>
</person-group> (<year>1980</year>). <article-title>Orienting of Attention</article-title>. <source>Q. J.&#x20;Exp. Psychol.</source> <volume>32</volume> (<issue>1</issue>), <fpage>3</fpage>&#x2013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1080/00335558008248231</pub-id> </citation>
</ref>
<ref id="B41">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Risko</surname>
<given-names>E. F.</given-names>
</name>
<name>
<surname>Laidlaw</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Freeth</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Foulsham</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Social Attention with Real versus Reel Stimuli: toward an Empirical Approach to Concerns about Ecological Validity</article-title>. <source>Front. Hum. Neurosci.</source> <volume>6</volume> (<issue>May</issue>), <fpage>1</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2012.00143</pub-id> </citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Risko</surname>
<given-names>E. F.</given-names>
</name>
<name>
<surname>Richardson</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Breaking the Fourth Wall of Cognitive Science</article-title>. <source>Curr. Dir. Psychol. Sci.</source> <volume>25</volume> (<issue>1</issue>), <fpage>70</fpage>&#x2013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1177/0963721415617806</pub-id> </citation>
</ref>
<ref id="B43">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ristic</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Friesen</surname>
<given-names>C. K.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2002</year>). <article-title>Are Eyes Special? it Depends on How You Look at it</article-title>. <source>Psychon. Bull. Rev.</source> <volume>9</volume> (<issue>3</issue>), <fpage>507</fpage>&#x2013;<lpage>513</lpage>. <pub-id pub-id-type="doi">10.3758/BF03196306</pub-id> </citation>
</ref>
<ref id="B44">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ruhland</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>C. E.</given-names>
</name>
<name>
<surname>Andrist</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Badler</surname>
<given-names>J.&#x20;B.</given-names>
</name>
<name>
<surname>Badler</surname>
<given-names>N. I.</given-names>
</name>
<name>
<surname>Gleicher</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2015</year>). <article-title>A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception</article-title>. <source>Computer Graphics Forum</source> <volume>34</volume> (<issue>6</issue>), <fpage>299</fpage>&#x2013;<lpage>326</lpage>. <pub-id pub-id-type="doi">10.1111/cgf.12603</pub-id> </citation>
</ref>
<ref id="B45">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scaife</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bruner</surname>
<given-names>J.&#x20;S.</given-names>
</name>
</person-group> (<year>1975</year>). <article-title>The Capacity for Joint Visual Attention in the Infant</article-title>. <source>Nature</source> <volume>253</volume> (<issue>5489</issue>), <fpage>265</fpage>&#x2013;<lpage>266</lpage>. <pub-id pub-id-type="doi">10.1038/253265a0</pub-id> </citation>
</ref>
<ref id="B46">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slessor</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Laird</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Phillips</surname>
<given-names>L. H.</given-names>
</name>
<name>
<surname>Bull</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Filippou</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Age-related Differences in Gaze Following: Does the Age of the Face Matter?</article-title> <source>Journals Gerontol. Ser. B: Psychol. Sci. Soc. Sci.</source> <volume>65B</volume> (<issue>5</issue>), <fpage>536</fpage>&#x2013;<lpage>541</lpage>. <pub-id pub-id-type="doi">10.1093/geronb/gbq038</pub-id> </citation>
</ref>
<ref id="B47">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stephenson</surname>
<given-names>L. J.</given-names>
</name>
<name>
<surname>Edwards</surname>
<given-names>S. G.</given-names>
</name>
<name>
<surname>Bayliss</surname>
<given-names>A. P.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>From Gaze Perception to Social Cognition: The Shared-Attention System</article-title>. <source>Perspect. Psychol. Sci.</source> <volume>16</volume>, <fpage>553</fpage>&#x2013;<lpage>576</lpage>. <pub-id pub-id-type="doi">10.1177/1745691620953773</pub-id> </citation>
</ref>
<ref id="B48">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tipples</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2002</year>). <article-title>Eye Gaze Is Not Unique: Automatic Orienting in Response to Uninformative Arrows</article-title>. <source>Psychon. Bull. Rev.</source> <volume>9</volume> (<issue>2</issue>), <fpage>314</fpage>&#x2013;<lpage>318</lpage>. <pub-id pub-id-type="doi">10.3758/BF03196287</pub-id> </citation>
</ref>
<ref id="B49">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tipples</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Orienting to Counterpredictive Gaze and Arrow Cues</article-title>. <source>Perception &#x26; Psychophysics</source> <volume>70</volume> (<issue>1</issue>), <fpage>77</fpage>&#x2013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.3758/PP.70.1.77</pub-id> </citation>
</ref>
<ref id="B50">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zaki</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ochsner</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>The Need for a Cognitive Neuroscience of Naturalistic Social Cognition</article-title>. <source>Ann. New York Acad. Sci.</source> <volume>1167</volume> (<issue>1</issue>), <fpage>16</fpage>&#x2013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2009.04601.x</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>