<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Virtual Real.</journal-id>
<journal-title>Frontiers in Virtual Reality</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Virtual Real.</abbrev-journal-title>
<issn pub-type="epub">2673-4192</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">695312</article-id>
<article-id pub-id-type="doi">10.3389/frvir.2021.695312</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Virtual Reality</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Move The Object or Move The User: The Role of Interaction Techniques on Embodied Learning in VR</article-title>
<alt-title alt-title-type="left-running-head">Bagher et&#x20;al.</alt-title>
<alt-title alt-title-type="right-running-head">Embodied Learning in VR</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Bagher</surname>
<given-names>Mahda M.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1296523/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Sajjadi</surname>
<given-names>Pejman</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1370262/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wallgr&#x00FC;n</surname>
<given-names>Jan Oliver</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>La Femina</surname>
<given-names>Peter C.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1498844/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Klippel</surname>
<given-names>Alexander</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1306216/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>Center for Immersive Experiences, Department of Geography, The Pennsylvania State University (PSU), <addr-line>University Park</addr-line>, <addr-line>PA</addr-line>, <country>United&#x20;States</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>Department of Geosciences, The Pennsylvania State University, <addr-line>University Park</addr-line>, <addr-line>PA</addr-line>, <country>United&#x20;States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/799146/overview">Amela Sadagic</ext-link>, Naval Postgraduate School, United&#x20;States</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/600392/overview">Andri Ioannou</ext-link>, Cyprus University of Technology, Cyprus</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/777714/overview">Missie Smith</ext-link>, Independent researcher, Detroit, MI, United&#x20;States</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Mahda M. Bagher, <email>mmm6749@psu.edu</email>; Pejman Sajjadi, <email>sfs5919@psu.edu</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Virtual Reality and Human Behaviour, a section of the journal Frontiers in Virtual Reality</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>25</day>
<month>10</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>2</volume>
<elocation-id>695312</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>04</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>09</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Bagher, Sajjadi, Wallgr&#x00FC;n, La Femina and Klippel.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Bagher, Sajjadi, Wallgr&#x00FC;n, La Femina and Klippel</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>To support the incorporation of immersive technologies into the educational curriculum, this article investigates the role of two affordances that are crucial in designing embodied interactive virtual learning environments (VLEs) to enhance students&#x2019; learning experience and performance: 1) the sense of presence as a subjective affordance of the VR system, and 2) bodily engagement as an embodied affordance and the associated sense of agency created through interaction techniques with three-dimensional learning objects. To investigate the impact of different design choices for interaction, and how they affect the associated sense of agency, learning experience, and performance, we designed two VLEs in the context of penetrative thinking in a critical 3D task in geosciences education: understanding the cross-sections of earthquakes&#x2019; depth and geometry in subduction zones around the world. Both VLEs were web-based desktop VR applications containing 3D data that participants ran remotely on their own computers using a standard screen. In the drag and scroll condition, we facilitated bodily engagement with the 3D data through object manipulation. In the first-person condition, we provided the ability for the user to move in space. In other words, we compared moving the objects with moving the user in space as the interaction modalities. We found that students had a better learning experience in the drag and scroll condition, but we could not find a significant difference in the sense of presence between the two conditions. Regarding learning performance, we found a positive correlation between the sense of agency and knowledge gain in both conditions. For students with low prior knowledge of the field, exposure to the VR experience in both conditions significantly improved their knowledge gain. 
Regarding individual differences, we investigated the knowledge gain of students with low penetrative thinking ability. We found that they benefited from the type of bodily engagement in the first-person condition and had a significantly higher knowledge gain than in the other condition. Our results encourage in-depth studies of embodied learning in VR to design more effective embodied virtual learning environments.</p>
</abstract>
<kwd-group>
<kwd>virtual reality</kwd>
<kwd>embodied learning</kwd>
<kwd>embodiment</kwd>
<kwd>bodily engagement</kwd>
<kwd>interaction technique</kwd>
<kwd>virtual learning environments</kwd>
<kwd>penetrative thinking</kwd>
<kwd>3D visualization</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Extended Reality (XR) technologies have become more accessible in terms of costs and required hardware and software and have gained attention and popularity in education (e.g., <xref ref-type="bibr" rid="B13">Dalgarno et&#x20;al., 2011</xref>; <xref ref-type="bibr" rid="B8">Bulu, 2012</xref>; <xref ref-type="bibr" rid="B37">Merchant et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B33">Legault et&#x20;al., 2019</xref>; <xref ref-type="bibr" rid="B30">Klippel et&#x20;al., 2019</xref>). Recent advances in XR technologies have created an interest in investigating the role of cognitively motivated principles in designing virtual learning environments (VLEs) for education (e.g., <xref ref-type="bibr" rid="B14">Dalgarno and Lee, 2010</xref>; <xref ref-type="bibr" rid="B1">Lee et&#x20;al., 2010</xref>; <xref ref-type="bibr" rid="B23">Johnson-Glenberg et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B9">Clifton et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B68">Yeonhee, 2018</xref>). There have been numerous efforts from various communities (e.g., IEEE ICICLE<xref ref-type="fn" rid="FN1">
<sup>1</sup>
</xref> and The Immersive Learning Research Network (iLRN)<xref ref-type="fn" rid="FN2">
<sup>2</sup>
</xref>) to incorporate technology-enhanced curricula into classrooms, to overcome the limitations of learning technologies, and to design engaging and compelling learning experiences. The learning efficacy of these experiences is a product of their design, which in turn shapes the experiences of users (<xref ref-type="bibr" rid="B14">Dalgarno and Lee, 2010</xref>; <xref ref-type="bibr" rid="B9">Clifton et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B21">Jerald, 2016</xref>; <xref ref-type="bibr" rid="B12">Czerwinski et&#x20;al., 2020</xref>). Among the various aspects that should be considered when designing an interactive virtual environment for learning, embodiment is argued to be one of the main contributors (<xref ref-type="bibr" rid="B7">Biocca, 1999</xref>; <xref ref-type="bibr" rid="B24">Johnson-Glenberg, 2018</xref>; <xref ref-type="bibr" rid="B25">Johnson-Glenberg et&#x20;al., 2020</xref>). Within a rich body of research on the role of embodiment in spatial learning, thinking, and reasoning (e.g., <xref ref-type="bibr" rid="B38">Mou and McNamara, 2002</xref>; <xref ref-type="bibr" rid="B65">Wilson, 2002</xref>; <xref ref-type="bibr" rid="B19">Hegarty et&#x20;al., 2006</xref>; <xref ref-type="bibr" rid="B20">Hostetter and Alibali, 2008</xref>; <xref ref-type="bibr" rid="B28">Kelly and McNamara, 2008</xref>; <xref ref-type="bibr" rid="B27">Kelly and McNamara, 2010</xref>; <xref ref-type="bibr" rid="B42">Paas and Sweller, 2012</xref>; <xref ref-type="bibr" rid="B51">Shapiro, 2014</xref>; <xref ref-type="bibr" rid="B43">Plummer et&#x20;al., 2016</xref>), there is a growing interest in investigating the role of embodiment in the design of VLEs as an essential factor influencing immersive learning (e.g., <xref ref-type="bibr" rid="B29">Kilteni et&#x20;al., 2012</xref>; <xref ref-type="bibr" rid="B34">Lindgren and Johnson-Glenberg, 2013</xref>; <xref ref-type="bibr" rid="B23">Johnson-Glenberg et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B35">Lindgren et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B9">Clifton et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B24">Johnson-Glenberg, 2018</xref>; <xref ref-type="bibr" rid="B53">Skulmowski and Rey, 2018</xref>; <xref ref-type="bibr" rid="B33">Legault et&#x20;al., 2019</xref>; <xref ref-type="bibr" rid="B25">Johnson-Glenberg et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B58">Southgate, 2020</xref>; <xref ref-type="bibr" rid="B2">Bagher, 2020</xref>).</p>
<p>This growing body of research examines the extent to which embodied learning in a virtual environment would enhance learning outcomes and improve learners&#x2019; spatial memory. Researchers in various fields have defined embodiment in different ways (<xref ref-type="bibr" rid="B29">Kilteni et&#x20;al., 2012</xref>) and focused on numerous aspects, from body representation to the type of bodily engagement or the degree of embodiment. One common goal is to find out what type or degree of embodiment is beneficial in designing engaging and effective learning experiences in XR, especially virtual reality (<xref ref-type="bibr" rid="B29">Kilteni et&#x20;al., 2012</xref>; <xref ref-type="bibr" rid="B44">Repetto et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B26">Johnson-Glenberg et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B53">Skulmowski and Rey, 2018</xref>; <xref ref-type="bibr" rid="B24">Johnson-Glenberg, 2018</xref>; <xref ref-type="bibr" rid="B58">Southgate, 2020</xref>; <xref ref-type="bibr" rid="B25">Johnson-Glenberg et&#x20;al., 2020</xref>).</p>
<p>In this article, our focus is not the degree of embodiment but one of the <italic>affordances</italic> that play a key role in inducing the sense of embodiment (SOE) in VR. We investigate the extent to which bodily engagement (as an embodied affordance) contributes to SOE in VLEs and can affect learning experience and performance. Affordances are defined as &#x201c;potential interactions with the environment&#x201d; (<xref ref-type="bibr" rid="B65">Wilson, 2002</xref>, p.625). Different VR systems can afford different levels of <italic>sensorimotor contingencies</italic> depending on the system characteristics and the design choices for creating the learning environment. Sensorimotor contingencies refer to the actions we take to change our perception of, and interact with, an environment, including but not limited to a virtual environment (<xref ref-type="bibr" rid="B32">Lee, 2004</xref>; <xref ref-type="bibr" rid="B54">Slater, 2009</xref>; <xref ref-type="bibr" rid="B55">Slater et&#x20;al., 2010</xref>; <xref ref-type="bibr" rid="B53">Skulmowski and Rey, 2018</xref>). <xref ref-type="bibr" rid="B23">Johnson-Glenberg et&#x20;al. (2014)</xref> refer to this as <italic>motor engagement</italic>. In this article, we use the term <italic>bodily engagement</italic> suggested by <xref ref-type="bibr" rid="B53">Skulmowski and Rey (2018)</xref>, as it entails a type of engagement that extends beyond the mind and considers the interaction between mind, body, and the environment (<xref ref-type="bibr" rid="B65">Wilson, 2002</xref>; <xref ref-type="bibr" rid="B53">Skulmowski and Rey, 2018</xref>). When the learning activities in a virtual environment are designed to engage the senses (e.g.,&#x20;vision) and the motor system (e.g.,&#x20;body parts), users become more engaged with those activities. 
As a result, they can be more embodied in the environment (<xref ref-type="bibr" rid="B7">Biocca, 1999</xref>; <xref ref-type="bibr" rid="B21">Jerald, 2016</xref>). The level of bodily engagement depends on the number of sensory systems engaged and whether the tasks are designed around meaningful activities. Bodily engagement can further affect memory trace and knowledge gain (<xref ref-type="bibr" rid="B26">Johnson-Glenberg et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B53">Skulmowski and Rey, 2018</xref>).</p>
<p>To examine the effect of bodily engagement on learning experience and performance, we focus on the design choices for bodily engagement in the same learning context with the same level of embodiment, rather than evaluating the effect of the medium on learning. We designed an experiment with two VLEs, both web-based desktop VR applications. Web-based desktop VR refers to a desktop VR experience delivered via a web browser and perceived via a standard screen. We argue that the type of 3D interaction for manipulating virtual objects matters (<xref ref-type="bibr" rid="B64">Weise et&#x20;al., 2019</xref>). In a recent study comparing immersive VR and desktop VR with two levels of embodiment (low: passive video watching; high: interacting with the learning content), <xref ref-type="bibr" rid="B22">Johnson-Glenberg et&#x20;al. (2021)</xref> found that the design is far more important than the platform. The critical finding is that whether a learning environment is designed with or without interaction techniques matters for learning.</p>
<p>To carry out this research, first, we investigate the following questions: Does the type of interaction technique affect the level of bodily engagement and the associated sense of agency? And does the type of interaction technique affect the sense of presence? To answer these questions, we look into 1) bodily engagement through two different interaction techniques and the associated sense of agency, and 2) the created sense of presence as the subjective or psychological affordance of the VR system (<xref ref-type="bibr" rid="B56">Slater and Wilbur, 1997</xref>; <xref ref-type="bibr" rid="B46">Ruscella and Obeid, 2021</xref>). We hypothesize that the design choices for the interaction technique influence the level of bodily engagement and the level of control over the learning environment that creates the sense of agency. This sense of agency can further affect the overall experienced sense of presence (<xref ref-type="bibr" rid="B40">Nowak and Biocca, 2003</xref>). Furthermore, presence, in turn, affects the level of bodily engagement and learning in VR (<xref ref-type="bibr" rid="B24">Johnson-Glenberg, 2018</xref>). Extensive research has been carried out on the sense of presence as a psychological affordance of a VR system (e.g., <xref ref-type="bibr" rid="B56">Slater and Wilbur, 1997</xref>; <xref ref-type="bibr" rid="B67">Witmer and Singer, 1998</xref>; <xref ref-type="bibr" rid="B50">Schuemie et&#x20;al., 2001</xref>; <xref ref-type="bibr" rid="B32">Lee, 2004</xref>; <xref ref-type="bibr" rid="B47">Sanchez-Vives and Slater, 2005</xref>; <xref ref-type="bibr" rid="B66">Wirth et&#x20;al., 2007</xref>; <xref ref-type="bibr" rid="B49">Schubert, 2009</xref>; <xref ref-type="bibr" rid="B55">Slater et&#x20;al., 2010</xref>; <xref ref-type="bibr" rid="B8">Bulu, 2012</xref>; <xref ref-type="bibr" rid="B4">Bailey et&#x20;al., 2012</xref>).</p>
<p>The goal of the VLEs used in this study is to support penetrative thinking in the &#x201c;Discovering Plate Boundaries<xref ref-type="fn" rid="FN3">
<sup>3</sup>
</xref>&#x201d; lab in an introductory physical geology course. In short, penetrative thinking is the ability to visualize a 2D profile of three-dimensional data. In designing and incorporating the VLEs into the plate boundaries lab exercise, we explore these research questions: Do interaction techniques affect learning experience and performance? And is one interaction technique superior to the other for students with a low penetrative thinking ability in terms of knowledge gain? We hypothesize that the interaction technique affects the learning experience and performance in the context of penetrative thinking in VR as a type of spatial learning. In a pilot study (<xref ref-type="bibr" rid="B3">Bagher et&#x20;al., 2020</xref>) conducted in Fall 2019, we focused on the 3D visualization of the US Geological Survey&#x2019;s Centennial Earthquake Catalog (<xref ref-type="bibr" rid="B45">Ritzwoller et&#x20;al., 2002</xref>) as a case study and immersive VR (IVR) using Head-Mounted Displays (HMDs) as an embodied and interactive learning experience. The pilot study compared IVR with the traditional teaching approach (using 2D maps) to determine whether IVR as an interactive 3D learning environment is superior to traditional teaching methods. Due to the COVID-19 pandemic during Fall 2020, physical attendance at the labs and the use of VR headsets (HMDs) were restricted. Therefore, we created two web-based desktop VR applications that presented the 3D visualization of the earthquake locations on a 2D interface with different interaction techniques. Delivering the applications through a web browser allowed students to take part in the experiment from home. We incorporated the virtual learning environments into the curriculum to teach plate boundaries and earthquake locations, and they were the only method of learning available for the lab exercise. 
Therefore, this study explores whether the design of the interaction techniques used in the VLEs would affect learning experience and performance when VR is the established method of learning in the&#x20;lab.</p>
<p>In the rest of the article, we first discuss the background of our research. Then, we discuss the design and implementation of the experiment. After reporting the results, we discuss their implications on learning experience, user experience, and learning performance. Then we address the limitations of the study and future directions for this research.</p>
</sec>
<sec id="s2">
<title>2 Background</title>
<sec id="s2-1">
<title>2.1 Sense of Embodiment</title>
<p>Embodied learning theory (<xref ref-type="bibr" rid="B59">Stolz, 2015</xref>; <xref ref-type="bibr" rid="B57">Smyrnaiou et&#x20;al., 2016</xref>), as a pedagogical approach rooted in embodied cognitive science, seeks to expand the application of embodied cognition into education. Embodiment is experiencing and interacting with the world through our bodies, suggesting that mind and body are linked (<xref ref-type="bibr" rid="B65">Wilson, 2002</xref>; <xref ref-type="bibr" rid="B29">Kilteni et&#x20;al., 2012</xref>; <xref ref-type="bibr" rid="B57">Smyrnaiou et&#x20;al., 2016</xref>). Therefore, in contrast to traditional cognitive science, embodied cognition explains how body and environment are related to cognitive processes (<xref ref-type="bibr" rid="B6">Barsalou, 1999</xref>; <xref ref-type="bibr" rid="B5">Barsalou, 2008</xref>; <xref ref-type="bibr" rid="B52">Shapiro, 2007</xref>; <xref ref-type="bibr" rid="B51">Shapiro, 2014</xref>; <xref ref-type="bibr" rid="B53">Skulmowski and Rey, 2018</xref>). Embodiment is rooted in the human perceptual and motor systems and in the body&#x2019;s interaction with the world, rather than relying only on abstract symbolic and internal representations (<xref ref-type="bibr" rid="B6">Barsalou, 1999</xref>; <xref ref-type="bibr" rid="B65">Wilson, 2002</xref>; <xref ref-type="bibr" rid="B63">Waller and Greenauer, 2007</xref>; <xref ref-type="bibr" rid="B52">Shapiro, 2007</xref>; <xref ref-type="bibr" rid="B51">Shapiro, 2014</xref>). 
In recent years, the design of embodied interfaces, including immersive experiences, has captured the attention of researchers in different fields in an attempt to improve embodied learning (e.g., <xref ref-type="bibr" rid="B14">Dalgarno and Lee, 2010</xref>; <xref ref-type="bibr" rid="B23">Johnson-Glenberg et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B9">Clifton et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B68">Yeonhee, 2018</xref>; <xref ref-type="bibr" rid="B12">Czerwinski et&#x20;al., 2020</xref>). To conceptualize embodiment in the context of virtual reality, we should define how SOE is constructed based on embodied mental representations. SOE is a psychological response to being situated in space in relation to other objects and the self. A virtual interface can be an extension of human senses linking the human to the virtual environment (<xref ref-type="bibr" rid="B7">Biocca, 1999</xref>; <xref ref-type="bibr" rid="B29">Kilteni et&#x20;al., 2012</xref>). In other words, SOE in VR can be defined as the integration of our senses with our technology-extended bodies (<xref ref-type="bibr" rid="B7">Biocca, 1999</xref>).</p>
<p>Among research studies focused on embodiment in VR, some have focused on defining different contributing factors to embodiment. For instance, <xref ref-type="bibr" rid="B29">Kilteni et&#x20;al. (2012)</xref> define the sense of embodiment as a result of the sense of self-location, the sense of agency, and the sense of body ownership. Some researchers (e.g., <xref ref-type="bibr" rid="B16">Gonzalez-Franco and Peck, 2018</xref>) focus on the role of the body as an avatar and its effect on the sense of body ownership and agency. In another example, <xref ref-type="bibr" rid="B58">Southgate (2020)</xref> conceptualizes embodiment in virtual learning from different angles, focusing on various representations of the body such as the cyborg body, the naturalistic body, and the political body. Furthermore, several studies focus on the role of bodily engagement in SOE in VR (e.g., <xref ref-type="bibr" rid="B24">Johnson-Glenberg, 2018</xref>; <xref ref-type="bibr" rid="B53">Skulmowski and Rey, 2018</xref>; <xref ref-type="bibr" rid="B25">Johnson-Glenberg et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B22">Johnson-Glenberg et&#x20;al., 2021</xref>). <xref ref-type="bibr" rid="B25">Johnson-Glenberg et&#x20;al. (2020)</xref> defined two affordances for designing VR for learning: 1) the sensation of presence, and 2) embodiment and the agency linked with manipulating objects in 3D. They define embodiment as a meaningful interaction with the learning content through bodily engagement. In another study, <xref ref-type="bibr" rid="B26">Johnson-Glenberg et&#x20;al. (2016)</xref> found that embodiment and sensorimotor feedback can increase knowledge retention for some types of knowledge. <xref ref-type="bibr" rid="B22">Johnson-Glenberg et&#x20;al. (2021)</xref> compared passive learning (watching a video) vs. active learning through embodied interactions on a 2D platform and in immersive VR (Oculus Go). In all conditions, users were seated. 
In the active learning scenario, interacting with a mouse on the 2D desktop or with controllers in immersive VR was considered highly embodied, whereas watching a video on either platform was considered low embodied. Therefore, the user had the same level of bodily engagement in VR and on a 2D desktop when assigned to active learning. They found a significant main effect for embodiment regardless of the platform: participants in the high embodied conditions learned the most. <xref ref-type="bibr" rid="B71">Zielasko and Riecke (2021)</xref> carried out a systematic analysis with VR experts in a workshop to examine the effect of body posture and embodied interactions on aspects of VR experiences such as engagement, enjoyment, comfort, and accessibility. They also found that walking provided more embodied locomotion cues than sitting. Among other research studies focusing on interaction techniques, locomotion, and embodiment (e.g., <xref ref-type="bibr" rid="B70">Zielasko et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B64">Weise et&#x20;al., 2019</xref>; <xref ref-type="bibr" rid="B15">Di Luca et&#x20;al., 2021</xref>), <xref ref-type="bibr" rid="B31">Lages and Bowman (2018)</xref> focused on the effect of manipulating objects vs. physically walking in the virtual environment on performance in demanding visual tasks. They found that designers of learning environments should consider the user&#x2019;s controller experience, past gaming experience, and spatial&#x20;ability.</p>
<p>In desktop VR, hand movements with a mouse or keyboard simulate bodily engagement at a lower level, giving the user the sense of being situated in the virtual environment while sitting in front of a 2D interface. We consider this form of SOE a lower level of bodily engagement than that of immersive VR, where the whole body can be moved and engaged. In this article, instead of comparing degrees of embodiment, we investigate the design choices for bodily engagement in two web-based desktop VR applications with the same level of embodiment. We posit that different design choices for interaction techniques affect learning experience and performance. We hypothesize that various interaction techniques can generate different levels of agency over the learning materials and result in different learning outcomes in terms of knowledge gain. Two main interaction techniques with learning content introduced in the literature are 1) <italic>gesture</italic>, and 2) <italic>object manipulation</italic> (<xref ref-type="bibr" rid="B42">Paas and Sweller, 2012</xref>). Several studies have explored the role of gesture as an effective bodily engagement technique for learning spatial information and offloading mental tasks to the surrounding environment (e.g., <xref ref-type="bibr" rid="B20">Hostetter and Alibali, 2008</xref>; <xref ref-type="bibr" rid="B34">Lindgren and Johnson-Glenberg, 2013</xref>; <xref ref-type="bibr" rid="B43">Plummer et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B24">Johnson-Glenberg, 2018</xref>). We propose to add a third interaction technique: 3) <italic>moving the user in space</italic>. This interaction technique creates a sense of embodied locomotion and gives the user the ability to control the viewpoint by either stepping back along the x, y, or z axis to see an overview of the 3D objects or moving closer to inspect them in greater detail. 
We are interested in examining the role of object manipulation and moving the user in space as interaction techniques contributing to bodily engagement in enhancing learning and the associated sense of agency.</p>
<sec id="s2-1-1">
<title>Bodily Engagement Through Object Manipulation</title>
<p>This interaction technique creates a sense of agency and control over the 3D objects in a three-dimensional environment. According to <xref ref-type="bibr" rid="B42">Paas and Sweller (2012)</xref>, object manipulation is a source of primary knowledge that will not affect cognitive load during the learning process. The primary systems can further assist the user in acquiring secondary knowledge. Manipulating an environment can help us solve a problem through mental structures that assist perception and action. Moreover, adding a modality like object manipulation in the immediate environment may increase the strength of memory trace and recall (<xref ref-type="bibr" rid="B6">Barsalou, 1999</xref>; <xref ref-type="bibr" rid="B65">Wilson, 2002</xref>; <xref ref-type="bibr" rid="B26">Johnson-Glenberg et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B25">Johnson-Glenberg et&#x20;al., 2020</xref>). In the recall process, in the absence of physical activity, sensorimotor actions like object manipulation can later assist the processes of thinking and knowing by representing information or drawing inferences (<xref ref-type="bibr" rid="B6">Barsalou, 1999</xref>; <xref ref-type="bibr" rid="B65">Wilson, 2002</xref>). Working memory has a sensorimotor nature and benefits from off-loading information onto perceptual and motor systems (<xref ref-type="bibr" rid="B65">Wilson, 2002</xref>). Therefore, we suggest using object manipulation to off-load cognitive work, which can free up working memory capacity. Object manipulation in a web-based desktop VR can be achieved through dragging, rotating, and scrolling with a mouse; many 3D software programs use this technique to manipulate 3D content.</p>
</sec>
<sec id="s2-1-2">
<title>Bodily Engagement Through Moving the User in Space</title>
<p>Moving in space, either physically in a virtual environment or through controller-based navigation in a web-based desktop VR, is a cognitively demanding task. Changing perspective to create a different perception of the environment in order to perform a task or solve a problem is called an epistemic action (<xref ref-type="bibr" rid="B20">Hostetter and Alibali, 2008</xref>). Epistemic actions are the result of sensorimotor contingencies (<xref ref-type="bibr" rid="B54">Slater, 2009</xref>; <xref ref-type="bibr" rid="B55">Slater et&#x20;al., 2010</xref>) supported by a VR system. Even though physical walking is cognitively demanding, it is considered the most natural interaction technique (<xref ref-type="bibr" rid="B31">Lages and Bowman, 2018</xref>). <xref ref-type="bibr" rid="B71">Zielasko and Riecke (2021)</xref> carried out a survey in which participants rated embodied (non-visual) locomotion cues higher for walking, walking in place, and arm swinging than for standing, sitting, or teleportation. In a web-based desktop VR, physical walking can be replicated using a controller. Moving in space can benefit from familiarity with controller-based games (<xref ref-type="bibr" rid="B31">Lages and Bowman, 2018</xref>) such as First Person Shooter (FPS) games. In these games, the player has an egocentric view and controls movement in space in different directions using a game controller or a mouse and keyboard.</p>
</sec>
</sec>
<sec id="s2-2">
<title>2.2 Penetrative Thinking</title>
<p>Spatial thinking is a fundamental part of many fields of science. One of the ways students can gain a better understanding of a spatial phenomenon is through visual-spatial thinking (<xref ref-type="bibr" rid="B36">Mathewson, 1999</xref>). Adequate visualization helps students to better understand the spatial representation of information. Spatial representations can be either extrinsic (e.g., locations) or intrinsic (e.g., shapes) to objects. One of the important spatial transformations related to intrinsic characteristics of objects is the ability to visualize penetrative views and to switch between two-dimensional and three-dimensional views. The ability to understand spatial relations inside an object and transform 3D data into a 2D profile is called <italic>penetrative thinking</italic> or <italic>cross-sectioning</italic> (<xref ref-type="bibr" rid="B41">Ormand et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B39">Newcombe and Shipley, 2015</xref>; <xref ref-type="bibr" rid="B17">Hannula, 2019</xref>). <xref ref-type="fig" rid="F1">Figure&#x20;1</xref> shows a penetrative thinking ability test that assesses students&#x2019; ability to mentally slice a 3D geologic structure in a block diagram (<xref ref-type="bibr" rid="B41">Ormand et&#x20;al., 2014</xref>).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Geologic Block Cross-sectioning Test (GBCT) for measuring students&#x2019; ability to mentally slice a 3D geologic structure in a block diagram. The GBCT post-study test is republished from (<xref ref-type="bibr" rid="B41">Ormand et&#x20;al., 2014</xref>).</p>
</caption>
<graphic xlink:href="frvir-02-695312-g001.tif"/>
</fig>
<p>In domains such as geosciences, students usually visualize the 3D structure of objects presented on 2D interfaces (e.g., desktop computers) and then extract 2D profiles from the 2D representation of the data. For instance, phenomena and observations related to plate tectonics are inherently three-dimensional, yet are often plotted on 2D maps. In introductory geoscience courses, students are often trained to visualize 3D data by learning how to read 2D maps and block diagrams. However, this method of representation makes it difficult for some students to visualize the depth, extent, and geometry of earthquakes, since students have different levels of penetrative thinking ability. A 3D representation of the data can aid in better understanding the extent, shape, and cross-sections of the data. As an example, <xref ref-type="fig" rid="F2">Figure&#x20;2</xref> shows the cross-section of earthquakes and volcanoes across South America. Drawing a cross-section based on a 3D visualization of data can be much easier than seeing the 2D representation of the data, imagining the 3D visualization, and then extracting the 2D profile.</p>
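<p>To make the cross-section idea concrete, extracting a 2D depth profile from 3D point data amounts to projecting each sample onto a vertical plane through the profile line and keeping only samples within a swath around it. The following sketch uses illustrative names and an assumed flat map-plane coordinate system; it is not the authors&#x2019; implementation:</p>

```python
import math

def cross_section(points, a, b, half_width_km=100.0):
    """Project 3D samples onto the vertical plane through profile A-B.

    points: iterable of (x_km, y_km, depth_km) map-plane samples;
    a, b: (x_km, y_km) endpoints of the profile line.
    Returns (distance_along_profile, depth) pairs for samples that fall
    within half_width_km of the profile line.
    """
    ax, ay = a
    dx, dy = b[0] - ax, b[1] - ay
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length          # unit vector along the profile
    profile = []
    for x, y, depth in points:
        rx, ry = x - ax, y - ay
        along = rx * ux + ry * uy              # distance along A-B
        across = abs(rx * uy - ry * ux)        # perpendicular offset from A-B
        if 0.0 <= along <= length and across <= half_width_km:
            profile.append((along, depth))
    return profile
```

<p>Plotting the returned pairs as distance versus depth yields a cross-section of the kind students draw by hand from 2D maps.</p>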
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>An example of a plot drawn in an introductory geoscience course: cross-section of earthquakes and volcanoes in South America. Circles show the location of earthquakes and triangles show the location of volcanoes with distance from the trench.</p>
</caption>
<graphic xlink:href="frvir-02-695312-g002.tif"/>
</fig>
</sec>
<sec id="s2-3">
<title>2.3 Sense of Embodiment in The Context of Penetrative Thinking</title>
<p>This research examines whether penetrative thinking, as a topic in spatial learning, can benefit from embodied learning. We incorporate embodied interactions with the 3D visualization of the data (earthquakes, volcanoes, and plate boundaries) to enhance students&#x2019; ability to visualize penetrative views and to better understand the cross-section or profile of the data in different regions around the world. To evaluate the role of bodily engagement in a penetrative thinking exercise through the two interaction techniques introduced in <xref ref-type="sec" rid="s2-1">Section 2.1</xref>, object manipulation and moving the user in space, we compared the two design choices by providing two VLEs in the form of web-based desktop VR applications. These VLEs are designed to create an interactive environment that supports penetrative thinking in an introductory physical geology course by facilitating visualization of the distribution and depth of earthquakes around the world. Full bodily engagement and a higher level of embodiment can be achieved in immersive VR using Head-Mounted Displays (HMDs). In a web-based desktop VR application, a lower level of bodily engagement can be created through hand movements and the use of a device such as a mouse or a keyboard.</p>
<p>In the first condition, where bodily engagement is induced through object manipulation, students do not actively move in the environment. They move and manipulate all the 3D objects together by dragging, rotating, or zooming in/out. This manipulation technique helps the students get closer to a specific location along the x, y, and z axes, where they can observe a specific subduction zone. In this condition, students have complete control over manipulating all 3D objects at the same time. They can switch between different datasets, but they cannot manipulate each object individually (i.e., individual earthquake locations or volcanoes). We refer to this visualization as <italic>the drag and scroll</italic> condition (<xref ref-type="sec" rid="s13">Supplementary Video S1</xref>). This interaction technique is similar to what is experienced in conventional 3D editors or geoscience software programs such as ArcScene<xref ref-type="fn" rid="FN4">
<sup>4</sup>
</xref>.</p>
<p>In the second condition, where bodily engagement is induced through moving the user in space and creating a sense of locomotion, students rotate the viewpoint in the desired direction (about the x, y, and z axes) and move farther from and closer to the 3D objects to inspect their spatial arrangement and associated information. In this condition, the user can move in space and change the direction of the viewpoint in the virtual environment in a natural way (similar to what is experienced in conventional first-person camera views in games). We manipulate the position and rotation of the first-person camera to create a sense of egocentric movement in space: the camera is rotated with the mouse to determine the direction of the viewpoint, and the arrow keys on the keyboard translate the user in that direction. We refer to this condition as the <italic>first-person</italic> condition (see the <xref ref-type="sec" rid="s13">Supplementary Video S1</xref>). This interaction technique is the closest simulation we could create in a web-based desktop VR to the sense of locomotion induced by physical walking in an immersive VR using HMDs. Based on these definitions, the main difference between the two interaction techniques is the design choice of moving the 3D objects or moving the&#x20;user.</p>
</sec>
</sec>
<sec id="s3">
<title>3 The Experiment</title>
<p>This research examines the role of bodily engagement as an embodied affordance on users&#x2019; learning experience and performance. To conduct this research, we defined two interaction techniques that can affect bodily engagement and the associated sense of agency. During the COVID-19 pandemic, when the use of HMDs became limited for safety reasons, designing web-based desktop VR applications accessible via web browsers gave students the flexibility of going through the exercise at home on their personal computers. We designed two web-based VLEs to explore how the design choices of interaction techniques can affect bodily engagement, agency, learning experience, and performance. As a case study, we visualized 3D earthquake locations around the world representing the USGS Centennial Earthquake Catalog (<xref ref-type="bibr" rid="B45">Ritzwoller et&#x20;al., 2002</xref>) and Holocene volcanoes (<xref ref-type="bibr" rid="B61">Venzke, 2013</xref>) in the context of plate boundaries (<xref ref-type="bibr" rid="B10">Coffin et&#x20;al., 1997</xref>).</p>
<p>
<xref ref-type="fig" rid="F3">Figure&#x20;3</xref> shows the top-down view of the web-based desktop VR applications and <xref ref-type="fig" rid="F4">Figure&#x20;4</xref> shows an egocentric view. The two VLEs are identical in terms of data visualization. What makes them different is how interaction with the datasets is realized, which can be shown in a recorded video but not in a figure.<xref ref-type="fn" rid="FN5">
<sup>5</sup>
</xref> The first VLE uses a mouse to drag, rotate, and zoom in/out of the 3D visualization of the earthquakes and volcanoes; we refer to this as the drag and scroll condition. The second VLE uses the mouse to define the direction of the viewpoint and the keyboard&#x2019;s arrow keys to translate in the environment; we refer to this as the first-person condition.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Top-down view of the web-based desktop VR application showing the world map, plate boundaries, earthquakes and volcanoes. <xref ref-type="fig" rid="F5">Figure&#x20;5</xref> shows the legend.</p>
</caption>
<graphic xlink:href="frvir-02-695312-g003.tif"/>
</fig>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Egocentric view of the USGS Centennial Earthquake Catalog and Holocene Volcanoes. <xref ref-type="fig" rid="F5">Figure&#x20;5</xref> shows the legend.</p>
</caption>
<graphic xlink:href="frvir-02-695312-g004.tif"/>
</fig>
<p>Considering these two experimental conditions, this study investigates the following hypotheses in two areas of interest: learning experience and learning performance.</p>
<p>
<bold>Learning Experience:</bold>
</p>
<p>
<bold>H1.</bold> Students in the first-person condition experience a higher sense of presence.</p>
<p>
<bold>H2.</bold> Students in the drag and scroll condition have higher control over the learning materials and as a result experience more agency.</p>
<p>
<bold>H3.</bold> Students report a higher level of perceived learning in the drag and scroll condition.</p>
<p>
<bold>H4.</bold> Students with a higher level of Visual Spatial imagery ability experience a higher sense of presence regardless of the condition.</p>
<p>
<bold>Learning Performance:</bold>
</p>
<p>
<bold>H5.</bold> The learning performance of students with low prior knowledge of the field improves after going through the experience regardless of the condition.</p>
<p>
<bold>H6.</bold> Students&#x2019; level of control positively affects their learning performance regardless of the condition.</p>
<p>
<bold>H7.</bold> Students with higher penetrative thinking ability show higher learning performance regardless of the condition.</p>
<p>
<bold>H8.</bold> Students with lower penetrative thinking ability perform better in the first-person condition.</p>
<sec id="s3-1">
<title>3.1 System Design</title>
<p>The data used to realize the visualizations in both conditions is the USGS Centennial Earthquake Catalog, a global catalog of well-located earthquakes from 1900 to 2008 that allows for the investigation of the depth and lateral extent of seismicity at plate boundaries (<xref ref-type="bibr" rid="B10">Coffin et&#x20;al., 1997</xref>). To complement the earthquake locations and further connect the exercise to plate tectonics and plate boundary zones, maps of the current plate boundaries and the locations of Holocene (i.e.,&#x20;&#x3c; 10,000&#xa0;years) volcanoes are also provided. <xref ref-type="fig" rid="F5">Figure&#x20;5</xref> shows the information provided in both conditions: 1) the three main plate boundary types; 2) a horizontal scale in km; 3) the depths of the earthquakes, binned as less than 35&#xa0;km, 35&#x2013;70&#xa0;km, 70&#x2013;150&#xa0;km, 150&#x2013;350&#xa0;km, 350&#x2013;550&#xa0;km, and 550&#x2013;720&#xa0;km; and 4) volcanoes in subduction zones, in rift zones, and in intraplate settings. The USGS Centennial Earthquake Catalog was originally distributed as a text file and the Holocene volcano dataset as an Excel XML file, both containing several fields including X, Y, and Z coordinates. The coordinates stored in the tables were imported into ArcGIS Pro<xref ref-type="fn" rid="FN6">
<sup>6</sup>
</xref> as XY point data using the XY Table to Point&#x20;tool.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Legend of the data visualized in the application, including plate boundaries, earthquakes and volcanoes.</p>
</caption>
<graphic xlink:href="frvir-02-695312-g005.tif"/>
</fig>
<p>The shapefiles were imported into Blender (<xref ref-type="bibr" rid="B11">Community, 2018</xref>) using a Blender importer called BlenderGIS<xref ref-type="fn" rid="FN7">
<sup>7</sup>
</xref>. Then they were imported to Unity3D&#xae;<xref ref-type="fn" rid="FN8">
<sup>8</sup>
</xref> as FBX files. The earthquakes and volcanoes were visualized in the form of point clouds and were properly georeferenced. To overcome the performance limitations of rendering a large dataset (a total of 13,077 earthquake points) in VR, we used the particle system of Unity3D to generate the points efficiently. Plate boundaries were visualized in the form of lines overlaid on the world map. Using these datasets, students can examine different subduction zones in terms of the locations and depths of the earthquakes.</p>
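<p>As a minimal sketch of what georeferencing involves for a flat world-map scene (the function names, map width, and vertical scale below are illustrative assumptions, not the values used in the actual application), each catalog sample can be mapped from longitude, latitude, and depth to scene coordinates, and its depth can be binned into the legend&#x2019;s classes:</p>

```python
import bisect

# Depth-class boundaries from the legend, in km.
DEPTH_EDGES = [35, 70, 150, 350, 550, 720]

def to_scene(lon_deg, lat_deg, depth_km, map_width=360.0, depth_scale=0.05):
    """Place a sample on an equirectangular map plane: x east-west,
    z north-south, y downward with depth (illustrative scales)."""
    x = lon_deg * (map_width / 360.0)
    z = lat_deg * (map_width / 360.0)
    y = -depth_km * depth_scale          # earthquakes plot below the surface
    return (x, y, z)

def depth_bin(depth_km):
    """Index of the legend depth class (0 = shallower than 35 km)."""
    return bisect.bisect_right(DEPTH_EDGES, depth_km)
```

<p>In the application itself, each point&#x2019;s depth class would determine the color of the corresponding particle, matching the legend in <xref ref-type="fig" rid="F5">Figure&#x20;5</xref>.</p>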
<p>The two interaction techniques (one per condition) were implemented in Unity3D. In both conditions, the users can switch between the earthquake and volcano datasets or enable both at the same time. Furthermore, they can access the labels and other information about the data by showing/hiding a legend of the dataset. A scale bar next to the map helps users with the perception of distances. In the drag and scroll condition, the view of the users (i.e.,&#x20;the camera) orbits around a pivot point (starting at the center of the scene) using a common drag-and-move functionality with the right mouse button, allowing the user to rotate the viewpoint. In addition, the pivot point can be moved within the 3D space of the scene along the X, Y, and Z axes using the same drag-and-move functionality with the left mouse button. Doing so enables the users to move along these axes and consequently orbit around the new pivot position. In the first-person condition, the users use a combination of mouse and keyboard: they translate smoothly along the X, Y, and Z axes using the WASD (or arrow) keys on the keyboard, while changing the direction of the movement based on the rotation of the camera with the mouse (i.e.,&#x20;steering which direction to move in with the mouse while the force is applied in that direction via the keyboard keys). The locomotion techniques in the two conditions are very similar in nature (virtual travel and viewpoint manipulation), but the conditions differ in the mechanics of interaction used for locomotion. The drag and scroll condition simulates the interaction mechanics of software like ArcGIS, and the first-person condition simulates the interaction mechanics found in a typical first-person shooter&#x20;game.</p>
</sec>
<sec id="s3-2">
<title>3.2 Participants</title>
<p>236 students from two separate sections of an introductory physical geology course were invited to participate in this study in the Fall of 2020. The experience was embedded into the course as a lab assignment. Using a web page, students selected whether they would like to take part in the research or only complete the exercise as a lab assignment. Of the 177 students who agreed to participate in the study, 96 were randomly assigned to the drag and scroll condition and 81 to the first-person condition. Participants&#x2019; section enrollment was anonymized during condition assignment to control for environmental factors. All participants were compensated with extra course credit for their participation. 29.94% of the participants were female, 67.79% male, and less than 3% identified as non-binary or gender-nonconforming. The average age of the students was 19.45, with a maximum age of 21 and a standard deviation of 0.83. Also, 73.44% of the participants were majoring in Engineering.</p>
</sec>
<sec id="s3-3">
<title>3.3 Measures and Tests</title>
<p>To measure learning experience and knowledge gain, two types of questions were used in this study: 1) standardized measures, and 2) knowledge tests. Several existing standardized measures were incorporated into the pre- and post-study questionnaires. Except for the demographic and background questions, all items were 5-point Likert-scale questions (ranging from 1 to 5, with 5 being the most positive), open-ended questions, or multiple-choice questions.</p>
<p>The pre-study questionnaire was comprised of the following measures:</p>
<p>&#x2022; Demographics and background-related questions about gender, age, major and minor fields of study, and the year of&#x20;study.</p>
<p>&#x2022; A self-report measure of individual differences in terms of visual imagery: the Visual Spatial Imagery (VSI) measure from the MEC Spatial Presence Questionnaire (<xref ref-type="bibr" rid="B62">Vorderer et&#x20;al., 2004</xref>), with each item measured on a 1 to 5&#x20;Likert scale. VSI is a spatial ability that reflects the capacity to create clear spatial images and later access them from memory; people with higher VSI ability find it easier to access those spatial images from their memory (<xref ref-type="bibr" rid="B66">Wirth et&#x20;al., 2007</xref>).</p>
<p>The post-study questionnaire was used to assess the learning experience of participants in light of the sense of presence and the sense of agency. Furthermore, the perceived learning experience of participants was measured.<list list-type="simple">
<list-item>
<p>&#x2022; For measuring the sense of presence, we used the 6-item metric of Spatial Situation Model (SSM) from the MEC Spatial Presence Questionnaire (<xref ref-type="bibr" rid="B62">Vorderer et&#x20;al., 2004</xref>). According to <xref ref-type="bibr" rid="B66">Wirth et&#x20;al. (2007)</xref>, a sense of presence can be built based on the Spatial Situation Model (SSM).</p>
</list-item>
<list-item>
<p>&#x2022; For measuring the sense of agency, we used a combination of measures including Possible Actions from the MEC Spatial Presence Questionnaire (<xref ref-type="bibr" rid="B62">Vorderer et&#x20;al., 2004</xref>) and measures suggested by <xref ref-type="bibr" rid="B1">Lee et&#x20;al. (2010)</xref> including immediacy of control, perceived ease of use, and control and active learning.</p>
</list-item>
<list-item>
<p>&#x2022; To measure perceived learning experience, we used three measures by <xref ref-type="bibr" rid="B1">Lee et&#x20;al. (2010)</xref>: reflective thinking, perceived learning effectiveness, and satisfaction. Perceived learning gives us feedback on the learning experience of students.</p>
</list-item>
<list-item>
<p>&#x2022; Two open-ended questions were used to capture the general impression of participants about what they would change in the experiment and the advantages and disadvantages of this method of learning compared to classical teaching methods in classrooms.</p>
</list-item>
</list>
</p>
<p>For the knowledge tests, a pre-study and a post-study test were designed. In addition, a test that measured the participants&#x2019; mental slicing and penetrative thinking ability was used:<list list-type="simple">
<list-item>
<p>&#x2022; The pre-study knowledge test contained six multiple-choice questions that tested students&#x2019; pre-knowledge of subduction zones and plate boundaries before going through the main experience.</p>
</list-item>
<list-item>
<p>&#x2022; In the post-study knowledge test, students answered seven multiple-choice questions testing their knowledge of the subject based on their penetrative thinking ability. In the pilot study (<xref ref-type="bibr" rid="B3">Bagher et&#x20;al., 2020</xref>), we asked the students to draw by hand cross-sections plotting the depth of the earthquakes against distance from a subduction zone trench for segments of South America and Japan. Drawing a cross-section is a straightforward technique for testing students&#x2019; penetrative thinking ability in the field. In this research, due to remote participation, we could not include the same exercise. Therefore, we curated questions that not only test students&#x2019; knowledge of the subduction zones but also test their penetrative thinking ability in the context of earthquake depth and distribution. For instance, we asked the students: &#x201c;Below are cross-sections of seismicity versus depth for four different subduction zones. Which cross-section is most similar to the South America subduction zone?&#x201d; The students had to use their VSI and penetrative thinking abilities to recall the cross-section of the South America subduction zone from their observations and choose one plot from multiple choices.</p>
</list-item>
<list-item>
<p>&#x2022; The Geologic Block Cross-sectioning Test (GBCT) (<xref ref-type="bibr" rid="B41">Ormand et&#x20;al., 2014</xref>) contains sixteen multiple-choice questions assessing the students&#x2019; ability to understand three-dimensional relationships by determining the correct vertical cross-section from a geologic block diagram.</p>
</list-item>
</list>
</p>
</sec>
<sec id="s3-4">
<title>3.4 Procedure</title>
<p>In both conditions, students filled out the pre-study questionnaire and then answered the pre-study knowledge test to establish their prior knowledge about the learning topic. Then, they were given information on the types of datasets they were going to explore in the VR experience and instructions on which areas to focus on. <xref ref-type="fig" rid="F6">Figure&#x20;6</xref> shows the areas of interest, including boxes 1&#x2013;4 and cross-section A-B.<list list-type="simple">
<list-item>
<p>
<bold>Region 1:</bold> South America</p>
</list-item>
<list-item>
<p>
<bold>Region 2:</bold> Tonga-Kermadec</p>
</list-item>
<list-item>
<p>
<bold>Region 3:</bold> Japan</p>
</list-item>
<list-item>
<p>
<bold>Region 4:</bold> Eastern Alaska</p>
</list-item>
<list-item>
<p>
<bold>Cross-section A-B:</bold> A cross-section across the South American convergent margin.</p>
</list-item>
</list>
</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Areas of interest for the virtual experience. Students were asked to focus on these areas during the virtual experience.</p>
</caption>
<graphic xlink:href="frvir-02-695312-g006.tif"/>
</fig>
<p>Students were asked to explore and pay attention to the distribution of the earthquakes and volcanoes, and the depth range of the earthquakes in these regions, while reflecting on the following questions: What do you observe with respect to these different subduction zones? Are the geometries of the subducting oceanic lithosphere (i.e.,&#x20;the distribution and geometry of the earthquakes) the same or are they different? Now, look specifically at the western margin of the South American Plate (Region 1). Is the Wadati-Benioff zone (i.e.,&#x20;the zone of seismicity that defines the subducting plate) the same from north to south along the margin? Students were informed that after the experience, they would be asked to answer several questions about these regions and the cross-section. In both conditions, they were given 15&#xa0;min to explore the datasets and memorize the distribution of earthquakes in the defined regions. A two-dimensional guide map on the lower right side of the screen showed the position and direction of the user on the world map, and a timer on the upper left side reminded them of the remaining time (<xref ref-type="fig" rid="F7">Figure&#x20;7</xref>). In both conditions, students could hide/show the legend and instructions.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Guide map and the time counter to help the students keep track of time and navigate in the learning environment.</p>
</caption>
<graphic xlink:href="frvir-02-695312-g007.tif"/>
</fig>
<p>After the experience, students first answered the post-study questionnaire, then the penetrative thinking ability test, and finally the post-study knowledge test. Placing the knowledge test at the end introduced a delay between the experience and the test, which allowed us to examine the effect of the embodied interactions on knowledge retention. The session, from start to end, took around 40&#xa0;min.</p>
</sec>
<sec id="s3-5">
<title>3.5 Analysis</title>
<p>For the learning experience assessment, we first identified outliers using the Interquartile Range (IQR) method, carefully checked the dataset, and removed any outliers. We then used Welch&#x2019;s two-sample <italic>t</italic>-tests to compare the first-person condition with the drag and scroll condition on the learning experience measures. For the learning performance measures, when the Z-scores of the pre- and post-study knowledge tests were compared regardless of the condition, Welch&#x2019;s two-sample <italic>t</italic>-test was calculated. When we compared the post-study grades between the conditions, the Wilcoxon signed-rank test was used because the grades were ranked data. To predict students&#x2019; sense of presence from Visual Spatial Imagery and post-study grades from penetrative thinking ability, regression equations were calculated. As the number of participants in the two groups differed, Hedges&#x2019; g (<xref ref-type="bibr" rid="B18">Hedges and Olkin, 2014</xref>) was calculated instead of Cohen&#x2019;s d for the effect sizes. A qualitative analysis of the two open-ended questions was performed to gain a better understanding of the participants&#x2019; opinions and experiences. Based on the approach proposed by <xref ref-type="bibr" rid="B48">Schreier (2012)</xref>, two independent coders went over the responses of participants and inductively generated codes capturing their content. In subsequent consensus meetings, the codes were grouped or rearranged into the final schema. Inter-rater reliability based on Cohen&#x2019;s Kappa was then calculated for the finalized results.</p>
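<p>As a generic illustration of this analysis pipeline (assumed variable names, not the authors&#x2019; analysis scripts), the IQR outlier screening and the Hedges&#x2019; g effect size can be sketched as follows; Welch&#x2019;s test itself is available in SciPy as shown in the trailing comment:</p>

```python
import numpy as np

def iqr_mask(x, k=1.5):
    """Boolean mask: True for values inside Tukey's fences
    (Q1 - k*IQR, Q3 + k*IQR); used to screen outliers."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x >= q1 - k * iqr) & (x <= q3 + k * iqr)

def hedges_g(a, b):
    """Hedges' g: Cohen's d with the small-sample bias correction,
    preferred when the two group sizes differ."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    d = (np.mean(a) - np.mean(b)) / pooled
    return d * (1 - 3 / (4 * (na + nb) - 9))

# Welch's two-sample t-test (unequal variances) is available in SciPy:
#   from scipy import stats
#   t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
```

<p>The correction factor in <italic>hedges_g</italic> shrinks Cohen&#x2019;s d slightly, compensating for the upward bias of the pooled standard deviation in small or unequal samples.</p>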
</sec>
</sec>
<sec id="s4">
<title>4 Results</title>
<sec id="s4-1">
<title>4.1 Learning Experience Assessment</title>
<p>
<xref ref-type="table" rid="T1">Table&#x20;1</xref> presents an overview of the mean, standard deviation, <italic>p</italic>-value, and effect size of the experience measures in the drag and scroll and the first-person conditions. As mentioned in the measures section, we measured the sense of presence, the sense of agency, and the perceived learning experience. There was no significant difference between the two conditions in terms of the sense of presence. Therefore, the first hypothesis (students in the first-person condition experience a higher sense of presence) is rejected. In terms of the sense of agency, we measured possible actions, immediacy of control, perceived ease of use, and control and active learning, introduced in <xref ref-type="sec" rid="s3-3">Section 3.3</xref>. There was a significant group difference in the ease of use scores between the first-person (<italic>M</italic>&#x20;&#x3d; 3.14, <italic>SD</italic> &#x3d; 0.63) and the drag and scroll (<italic>M</italic>&#x20;&#x3d; 3.32, <italic>SD</italic> &#x3d; 0.50) conditions in favor of the drag and scroll condition [<italic>t</italic> (153.12) &#x3d; &#x2212;1.98, <italic>p</italic>&#x20;&#x3d; 0.04]. Immediacy of control measures the students&#x2019; agency to change the view position and manipulate spatial objects; the difference for immediacy of control approached significance [<italic>t</italic> (174) &#x3d; &#x2212;1.77, <italic>p</italic>&#x20;&#x3d; 0.07] in favor of the drag and scroll condition (<italic>M</italic>&#x20;&#x3d; 4.07, <italic>SD</italic> &#x3d; 0.88). We found no significant difference between the two conditions in terms of possible actions or control and active learning. Based on these results, we have found some evidence in favor of the second hypothesis: students in the drag and scroll condition have higher control over the learning materials and as a result experience more agency. However, since not all measures related to this affordance showed significant differences, we cannot conclude that the second hypothesis is entirely supported. In terms of perceived learning, students in the drag and scroll condition (<italic>M</italic>&#x20;&#x3d; 3.41, <italic>SD</italic> &#x3d; 0.56) were significantly more satisfied [<italic>t</italic> (146) &#x3d; 1.76, <italic>p</italic>&#x20;&#x3d; 0.04] than in the first-person condition (<italic>M</italic>&#x20;&#x3d; 3.20, <italic>SD</italic> &#x3d; 0.75). We found no significant difference between the conditions in terms of reflective thinking or perceived learning effectiveness. Therefore, the only evidence we found in favor of the third hypothesis (students report a higher level of perceived learning in the drag and scroll condition) was satisfaction; consequently, we cannot conclude that the third hypothesis is entirely supported.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Overview of the learning experience measures.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left"/>
<th align="center">Measures</th>
<th align="left">Conditions</th>
<th align="left">M</th>
<th align="left">SD</th>
<th align="left">p</th>
<th align="left">Effect size</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Sense of presence</td>
<td align="center">SSM</td>
<td align="left">Drag and scroll</td>
<td align="char" char=".">3.669</td>
<td align="char" char=".">0.73</td>
<td align="center">0.506</td>
<td align="center">0.002</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">
</td>
<td align="left">First-person</td>
<td align="char" char=".">3.667</td>
<td align="char" char=".">0.85</td>
<td align="left">
</td>
<td align="left">
</td>
</tr>
<tr>
<td align="left">Sense of agency</td>
<td align="center">Possible actions</td>
<td align="left">Drag and scroll</td>
<td align="char" char=".">3.50</td>
<td align="char" char=".">0.70</td>
<td align="char" char=".">0.2</td>
<td align="char" char=".">0.12</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">
</td>
<td align="left">First-person</td>
<td align="char" char=".">3.40</td>
<td align="char" char=".">0.88</td>
<td align="left">
</td>
<td align="left">
</td>
</tr>
<tr>
<td align="left">
</td>
<td align="center">Ease of use</td>
<td align="left">Drag and scroll</td>
<td align="char" char=".">3.32</td>
<td align="char" char=".">0.50</td>
<td align="char" char=".">0.048&#x2a;</td>
<td align="char" char=".">0.31</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">
</td>
<td align="left">First-person</td>
<td align="char" char=".">3.14</td>
<td align="char" char=".">0.63</td>
<td align="left">
</td>
<td align="left">
</td>
</tr>
<tr>
<td align="left">
</td>
<td align="center">Immediacy of control</td>
<td align="left">Drag and scroll</td>
<td align="char" char=".">4.07</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.07</td>
<td align="char" char=".">0.21</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">
</td>
<td align="left">First-person</td>
<td align="char" char=".">3.84</td>
<td align="char" char=".">0.87</td>
<td align="left">
</td>
<td align="left">
</td>
</tr>
<tr>
<td align="left">
</td>
<td align="center">Control and active learning</td>
<td align="left">Drag and scroll</td>
<td align="char" char=".">3.83</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.27</td>
<td align="char" char=".">0.133</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">
</td>
<td align="left">First-person</td>
<td align="char" char=".">3.96</td>
<td align="char" char=".">0.93</td>
<td align="left">
</td>
<td align="left">
</td>
</tr>
<tr>
<td align="left">Perceived learning</td>
<td align="center">Reflective thinking</td>
<td align="left">Drag and scroll</td>
<td align="char" char=".">3.64</td>
<td align="char" char=".">0.73</td>
<td align="char" char=".">0.09</td>
<td align="char" char=".">0.192</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">
</td>
<td align="left">First-person</td>
<td align="char" char=".">3.49</td>
<td align="char" char=".">0.83</td>
<td align="left">
</td>
<td align="left">
</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Perceived learning effectiveness</td>
<td align="left">Drag and scroll</td>
<td align="char" char=".">3.61</td>
<td align="char" char=".">0.63</td>
<td align="char" char=".">0.13</td>
<td align="char" char=".">0.18</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">
</td>
<td align="left">First-person</td>
<td align="char" char=".">3.48</td>
<td align="char" char=".">0.79</td>
<td align="left">
</td>
<td align="left">
</td>
</tr>
<tr>
<td align="left">
</td>
<td align="center">Satisfaction</td>
<td align="left">Drag and scroll</td>
<td align="char" char=".">3.41</td>
<td align="char" char=".">0.56</td>
<td align="char" char=".">0.04&#x2a;</td>
<td align="char" char=".">0.32</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">
</td>
<td align="left">First-person</td>
<td align="char" char=".">3.20</td>
<td align="char" char=".">0.75</td>
<td align="left">
</td>
<td align="left">
</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In sum, the results indicate that students in the drag and scroll condition had a better learning experience in terms of ease of use, immediacy of control, and satisfaction.</p>
<p>A simple linear regression was calculated to predict the effect of Visual Spatial Imagery (VSI), as a spatial ability, on the sense of presence (SSM). Independent of the condition, a significant regression equation was found [<italic>F</italic> (1,175) &#x3d; 53.04, <italic>p</italic>&#x20;&#x3c; 0.001] with an adjusted R<sup>2</sup> of 0.228: students&#x2019; sense of presence increased by 0.64 for each unit increase in VSI. Therefore, hypothesis 4 can be accepted: students with a higher level of VSI experience a higher sense of presence. <xref ref-type="fig" rid="F8">Figure&#x20;8</xref> shows that in both the drag and scroll and the first-person conditions, the level of presence depends on VSI. A significant regression equation was found for the first-person condition [<italic>F</italic> (1,79) &#x3d; 28.64, <italic>p</italic>&#x20;&#x3c; 0.001] with an adjusted R<sup>2</sup> of 0.256; sense of presence increased by 0.66 for each unit increase in VSI. For the drag and scroll condition, the significant regression equation is [<italic>F</italic> (1,94) &#x3d; 22.83, <italic>p</italic>&#x20;&#x3c; 0.001] with an adjusted R<sup>2</sup> of 0.186; sense of presence increased by 0.61 for each unit increase in&#x20;VSI.</p>
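As a worked illustration of the simple regressions above, `scipy.stats.linregress` fits SSM on VSI and returns the slope (the change in presence per unit of VSI), the correlation, and the p value. The data here are invented for illustration, not the study's:

```python
from scipy import stats

# Hypothetical VSI scores and presence (SSM) ratings -- purely illustrative,
# not the study's data.
vsi = [1.0, 2.0, 3.0, 4.0, 5.0]
ssm = [1.6, 2.3, 2.9, 3.5, 4.2]  # roughly ssm = 0.64 * vsi + 0.98

# Fit SSM ~ VSI; the slope is the change in presence per unit of VSI.
result = stats.linregress(vsi, ssm)
print(f"slope={result.slope:.2f}, R^2={result.rvalue**2:.3f}, p={result.pvalue:.5f}")
```

The same call, run once per condition on that condition's subset of the data, yields the per-condition slopes reported above.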
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>The plot of VSI and SSM for each condition.</p>
</caption>
<graphic xlink:href="frvir-02-695312-g008.tif"/>
</fig>
</sec>
<sec id="s4-2">
<title>4.2 Learning Performance Assessment</title>
<p>Before going through the experience, students answered six questions about subduction zones to test their knowledge of the subject, specifically their ability to understand the extent and geometry of subduction zones based on their interpretation of earthquakes, volcanoes, and plate boundaries. The total possible score was 8; the test yielded an average score of 3.21 (<italic>SD</italic> &#x3d; 1.24), with a minimum of 1 and a maximum of 7. Of the students who attended the study, 56.49% scored below the average, indicating lower prior knowledge of the field compared to average performance. The post-study knowledge test contained seven questions with a total possible score of 14. It examined the same knowledge concepts with different types of questions to evaluate whether students&#x2019; understanding of the subject had improved after going through the experience. The test yielded an average score of 7.9 (<italic>SD</italic> &#x3d; 2.32), with a minimum of 3 and a maximum of&#x20;13.</p>
<p>Comparing the Z-scores of the pre- and post-study knowledge tests, regardless of the condition, shows that students&#x2019; performance improved by 0.05. However, the difference is not statistically significant: [<italic>t</italic> (176) &#x3d; 0.55, <italic>p</italic>&#x20;&#x3d; 0.58]. We had assumed that we could detect the presence or absence of students&#x2019; knowledge gain by studying the whole sample. However, students with higher prior knowledge improve differently than students with lower knowledge of the field. Subsequently, we decided to analyze the learning performance of students with low prior knowledge of the subject relative to average performance (pre-test Z-score &#x2264; 0). Based on our analysis, the performance of students with low prior knowledge of the field improved significantly regardless of the condition: [<italic>t</italic> (167) &#x3d; &#x2212;5.86, <italic>p</italic>&#x20;&#x3c; 0.001]; for the drag and scroll condition, [<italic>t</italic> (52) &#x3d; &#x2212;3.34, <italic>p</italic>&#x20;&#x3c; 0.001], and for the first-person condition, [<italic>t</italic> (46) &#x3d; &#x2212;5.41, <italic>p</italic>&#x20;&#x3c; 0.001]. Therefore, hypothesis 5 is accepted: both conditions have a significantly positive effect on students with low prior knowledge of the subject, and exposure to the VLEs improved their learning performance in terms of understanding earthquakes&#x2019; distribution and depth. In other words, when students with low prior knowledge of the field were exposed to the 3D representation of the epicenters of earthquakes from the USGS Centennial Earthquake catalog and the locations of Holocene volcanoes, the 3D visualization helped them better understand the locations, depth, and geometry of earthquakes in subduction zones across different regions. Yet, we could not find a significant difference between the conditions in terms of knowledge gain for students with low prior knowledge of the field: [<italic>t</italic> (97.2) &#x3d; 0.94, <italic>p</italic>&#x20;&#x3d;&#x20;0.34].</p>
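The subgroup analysis above can be sketched as follows, with invented scores rather than the study's data: each test is converted to cohort-wide Z-scores so that the 8-point pre-test and the 14-point post-test become comparable, the low-prior-knowledge subgroup is selected by a pre-test Z-score of at most 0, and a paired t-test compares the subgroup's pre and post Z-scores:

```python
from statistics import mean, stdev
from scipy import stats

# Invented pre- (out of 8) and post-study (out of 14) scores for a small
# cohort -- illustrative only, not the study's data.
pre = [2, 3, 5, 1, 6, 2, 4, 7]
post = [7, 8, 9, 6, 10, 8, 7, 12]

def zscores(xs):
    """Standardize scores so tests with different maxima are comparable."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

z_pre, z_post = zscores(pre), zscores(post)

# Low prior knowledge: pre-test Z-score at or below 0 (i.e., below average).
low = [i for i, z in enumerate(z_pre) if z <= 0]

# Paired t-test on the subgroup's pre vs post Z-scores.
t, p = stats.ttest_rel([z_post[i] for i in low], [z_pre[i] for i in low])
print(f"n={len(low)}, t={t:.2f}, p={p:.3f}")
```

Note that over the full cohort the mean of each Z-scored test is 0 by construction, which is why the subgroup selection matters for detecting improvement.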
<p>We also analyzed the impact of the immediacy of control, one of the important measures of the sense of agency, on students&#x2019; learning performance (post-study grades). The students&#x2019; post-study grades were dependent on their evaluation of the immediacy of control in both conditions: the more in control a student felt, the higher their post-study grade was (<xref ref-type="fig" rid="F9">Figure&#x20;9</xref>). In the drag and scroll condition, a significant non-linear regression equation was found [<italic>F</italic> (1,92) &#x3d; 3.406, <italic>p</italic>&#x20;&#x3d; 0.02] with an adjusted R<sup>2</sup> of 0.07. In the first-person condition, a significant regression equation was found [<italic>F</italic> (1,77) &#x3d; 3.007, <italic>p</italic>&#x20;&#x3d; 0.03] with an adjusted R<sup>2</sup> of 0.069. Although the adjusted R<sup>2</sup> values for both equations are very low, showing that the immediacy of control is not a strong contributing factor, there is nonetheless a significant correlation. Accordingly, hypothesis 6 is accepted: a higher level of control positively affects students&#x2019; learning performance.</p>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>The plot of post-study grades and the immediacy of control for each condition.</p>
</caption>
<graphic xlink:href="frvir-02-695312-g009.tif"/>
</fig>
<p>To measure penetrative thinking ability, students took the Geologic Block Cross-sectioning Test (GBCT) (<xref ref-type="bibr" rid="B41">Ormand et&#x20;al., 2014</xref>). A simple linear regression was calculated to predict the post-study grades based on the GBCT score (penetrative thinking ability). Independent of the condition, a significant regression equation was found [<italic>F</italic> (1,175) &#x3d; 21.87, <italic>p</italic>&#x20;&#x3c; 0.001] with an adjusted R<sup>2</sup> of 0.106. Therefore, hypothesis 7 about learning performance is accepted: students with higher penetrative thinking ability show higher learning performance. This shows that the penetrative thinking ability of students who understand the spatial relations between objects enables them to better understand the location, direction, and shape of earthquake events around the world. For students with lower penetrative thinking ability (hypothesis 8), there is a significant difference between the post-study knowledge grades of the first-person condition (<italic>M</italic>&#x20;&#x3d; 7.84, <italic>SD</italic> &#x3d; 2.07) and the drag and scroll condition (<italic>M</italic>&#x20;&#x3d; 6.63, <italic>SD</italic> &#x3d; 2.5) in favor of the first-person condition, [<italic>t</italic> (67) &#x3d; 2.36, <italic>p</italic>&#x20;&#x3d; 0.02]. We can conclude that the first-person condition, with the freedom of moving in space and inspecting earthquake locations by moving closer to the objects in a first-person view, has a positive effect on students with low penetrative thinking ability. Therefore, hypothesis 8 is accepted: students with lower penetrative thinking ability perform better in the first-person condition. Interestingly, in the drag and scroll condition, there is a significant difference between the pre- and post-study grades (Z-scores) of students with low penetrative thinking ability (mean of the differences &#x3d; 0.45), [<italic>t</italic> (37) &#x3d; 2.11, <italic>p</italic>&#x20;&#x3d; 0.04]. Although students with low penetrative thinking ability in the drag and scroll condition had lower post-study grades compared to the first-person condition, they improved significantly over their pre-study grades. This indicates that even though the drag and scroll condition is not as effective as the first-person condition for knowledge gain in students with low penetrative ability, it is still an effective medium that improved students&#x2019; knowledge gain after exposure to the VR experience.</p>
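The between-condition comparison above is an independent-samples t-test; the fractional degrees of freedom reported earlier [t (97.2)] suggest Welch's unequal-variance variant. A minimal sketch with invented post-study grades (not the study's data):

```python
from scipy import stats

# Invented post-study grades for students with low GBCT scores in each
# condition -- illustrative only, not the study's data.
first_person = [9, 8, 7, 10, 8, 6, 9, 7]
drag_and_scroll = [7, 6, 8, 5, 7, 6, 8, 5]

# Welch's t-test (equal_var=False) does not assume equal variances across
# the two groups, which is what produces fractional degrees of freedom.
t, p = stats.ttest_ind(first_person, drag_and_scroll, equal_var=False)
print(f"t={t:.2f}, p={p:.3f}")
```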
</sec>
<sec id="s4-3">
<title>4.3 Qualitative Analysis of the Open-Ended Feedback of The Experience</title>
<p>Participants were asked two open-ended questions about their experiences as part of the post-study questionnaire:<list list-type="simple">
<list-item>
<p>
<bold>Q1:</bold> If you could have changed something in the experience what would it have been and&#x20;why?</p>
</list-item>
<list-item>
<p>
<bold>Q2:</bold> If any, did this current method of instruction have advantages over classical methods of teaching used in classrooms?</p>
</list-item>
</list>
</p>
<p>Complementing the quantitative analysis, the qualitative analysis provides insights into the experiences of users after going through each condition. The extracted codes capturing the content of participants&#x2019; comments, the percentage of participants mentioning each code, and Cohen&#x2019;s kappa inter-rater reliability coefficients are reported in <xref ref-type="table" rid="T2">Table&#x20;2</xref>. Some of the codes apply to the experience regardless of the condition, and some are specific to condition-dependent design choices.</p>
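The inter-rater reliability column in Table 2 uses Cohen's kappa, which corrects raw agreement between two coders for the agreement expected by chance. A minimal, self-contained sketch with hypothetical rater codes (not the study's coding data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: inter-rater agreement corrected for chance."""
    n = len(rater1)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[label] * c2[label] for label in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two raters to four comments.
kappa = cohens_kappa(
    ["more-info", "more-info", "navigation", "navigation"],
    ["more-info", "navigation", "navigation", "navigation"],
)
print(round(kappa, 3))  # -> 0.5
```

A kappa of 1 (as in several Table 2 rows) means the two raters agreed on every item; values near 0 mean agreement no better than chance.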
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>Summary of the structured content analysis.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Question</th>
<th align="center">Code</th>
<th align="center">% Participants in the drag and scroll condition</th>
<th align="center">% Participants in the first-person condition</th>
<th align="center">Cohen&#x2019;s kappa</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<bold>Q1</bold>
</td>
<td align="left">Add more comprehensive information</td>
<td align="char" char=".">18.7</td>
<td align="char" char=".">22.2</td>
<td align="char" char=".">0.859</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Accessibility to learning objectives in VR</td>
<td align="char" char=".">12.5</td>
<td align="char" char=".">16</td>
<td align="char" char=".">0.883</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Difficulty navigating in the space</td>
<td align="char" char=".">0.09</td>
<td align="char" char=".">0.09</td>
<td align="char" char=".">0.965</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Requesting more interaction</td>
<td align="char" char=".">0.07</td>
<td align="char" char=".">0.06</td>
<td align="char" char=".">0.9</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Make the experience more visually appealing</td>
<td align="char" char=".">0.10</td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">0.9</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Negative learning experience</td>
<td align="char" char=".">0.06</td>
<td align="char" char=".">0.03</td>
<td align="char" char=".">0.943</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Different movement mechanics using the keyboard</td>
<td align="char" char=".">0</td>
<td align="char" char=".">0.08</td>
<td align="char" char=".">0.919</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Difficulty running the app</td>
<td align="char" char=".">0.04</td>
<td align="char" char=".">0.03</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">The experience was too long</td>
<td align="char" char=".">0.03</td>
<td align="char" char=".">0</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Improve navigation with the mousepad</td>
<td align="char" char=".">0</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Preferring drag and scroll over moving in space using the keyboard</td>
<td align="char" char=".">0</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Clear representation of the distance between objects</td>
<td align="char" char=".">0</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Have a zoom function with the real images of locations</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Using a different method for switching between datasets</td>
<td align="char" char=".">0</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Prefer HMD over the web application</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Show legend at all times</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Suggest quick jump navigation technique</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
<bold>Q2</bold>
</td>
<td align="left">The experience has advantages over classical teaching methods</td>
<td align="char" char=".">54.16</td>
<td align="char" char=".">40.7</td>
<td align="char" char=".">0.977</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">The 3D representation and interactive features improved understanding of the concepts</td>
<td align="char" char=".">37.5</td>
<td align="char" char=".">38.2</td>
<td align="char" char=".">0.911</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">The experience has no advantage over classical teaching methods</td>
<td align="char" char=".">0.09</td>
<td align="char" char=".">19.7</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Learning at your own pace</td>
<td align="char" char=".">11.04</td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">0.907</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Prefer the classical teaching method</td>
<td align="char" char=".">0.04</td>
<td align="char" char=".">0.07</td>
<td align="char" char=".">0.943</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">The experience lacked active QA with the instructor</td>
<td align="char" char=".">0.06</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">The experience was easier</td>
<td align="char" char=".">0</td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">1</td>
</tr>
<tr>
<td align="left">
</td>
<td align="left">Superior to other remote learning approaches</td>
<td align="char" char=".">0</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>For the first question, the most frequent code was a request for more comprehensive information. Examples include requesting an interactive legend (e.g.,&#x20;audio feedback), detailed descriptions of features, and adding more features (e.g.,&#x20;mountains or continent names). Concerning accessibility to learning objectives in VR, instructions and learning objectives were given to students before the experience, including the highlighted areas to focus on and questions to keep in mind while exploring the datasets. However, many students wanted these learning objectives to be visible inside the VR experience, to be able to turn the highlighted areas on and off, and to receive more educational explanations of the various subduction zones in the form of audio or text instead of self-exploration. Some students felt no change to the application was needed, whereas others mentioned difficulty navigating in the space, a negative learning experience, and difficulties running the app. In the first-person condition, some suggested different movement mechanics to improve the experience: instead of the mouse defining the direction of movement, two keys on the keyboard should allow for up and down movement. No one in the first-person condition complained that the experience was too long, while three people in the drag and scroll condition did. Since the method of interaction in the first-person condition was new and students were not familiar with this method of movement in space, they might have spent a considerable amount of time learning how to navigate and not felt the time passing, whereas the method of interaction in the drag and scroll condition is similar to geoscience software programs that many students are already familiar&#x20;with.</p>
<p>In response to the second question, almost half of the students found this method of teaching superior to classical methods. The advantages cited included learning at your own pace, the experience being easier, and the 3D representation and interactive features improving their understanding of the concepts. 11.04% of the students in the drag and scroll condition declared that this method helped them learn at their own pace, while only 0.02% in the first-person condition felt that way. As mentioned in the analysis of the first question, students in the first-person condition might have spent a considerable amount of time learning how to navigate in space, which might have affected their learning pace. On the other hand, 19.7% of the students in the first-person condition found this type of experience to have no advantages over classical methods of teaching, while only 0.09% in the drag and scroll condition felt that way. This indicates that although 40.7% of the students in the first-person condition found this method advantageous, 19.7% disagreed. One piece of negative feedback about this method of teaching was the absence of active Q&#x26;A with the instructor while learning.</p>
<p>Overall, the insights from the first question show that students enrolled in the physical geology course are not used to memorization tasks; they typically plot the locations and depths of earthquakes by directly observing the data. In the exercise we designed, they first observed the data and then recalled the cross-sections based on memorization and memory trace. The second question suggests that half of the students are open to technology-integrated teaching methods. By addressing the issues identified in the coding of the first question, more students might become open to this method of teaching.</p>
</sec>
</sec>
<sec id="s5">
<title>5 Discussion</title>
<p>This study investigated the impact of bodily engagement on the learning experience and performance in the context of penetrative thinking in a critical 3D task in geosciences education: understanding the cross-section of the depth and geometry of earthquakes with distance from the trench. Since we used the same platform (web-based desktop VR) for the design of both VLEs, this study focuses not on the effect of different mediums or degrees of embodiment on learning, but on the impact of interaction techniques on learning experience and performance.</p>
<sec id="s5-1">
<title>5.1 The Effect of Bodily Engagement on Learning Experience</title>
<p>Our quantitative evaluation of the learning experience utilized established self-reported measures. We anticipated a significant difference in the sense of presence between the two conditions. Although the sense of presence did not differ significantly between the conditions, we found that students with higher Visual Spatial Imagery (VSI) ability experience a higher sense of presence in both conditions. In terms of perceived learning, we found that students are significantly more satisfied with the drag and scroll condition, but we could not find any difference in the other measures related to perceived learning. Concerning the sense of agency, students reported that the drag and scroll condition is significantly easier to use than the first-person condition. They also found the drag and scroll condition to offer a higher level of immediacy of control than the first-person condition. This evaluation indicates that students are more comfortable and familiar with data manipulation as an interaction method, that is, dragging, rotating, and zooming in/out of the 3D data. This made us curious to see whether the drag and scroll condition being perceived as the easier interaction technique would translate into superior knowledge gain as well. The results of the structured content analysis show that almost the same percentage of students in both conditions felt that the 3D representation and the method of interaction improved their understanding of the subject.</p>
</sec>
<sec id="s5-2">
<title>5.2 The Effect of Bodily Engagement on Learning Performance</title>
<p>Overall, all students gained some knowledge by going through the experience, but we aimed to investigate the impact of interaction techniques on knowledge gain for students with low prior knowledge of the field. Our analysis showed that knowledge gain in students with low prior knowledge improved significantly after the virtual experience in both conditions. We also found that, in both conditions, students who felt more in control performed significantly better in terms of knowledge gain. This demonstrates that having control can be a contributing factor in knowledge gain. It also suggests that some students are more comfortable moving in the three-dimensional environment and inspecting objects by changing their viewpoint, whereas others are more comfortable with data manipulation. With this result in mind, we looked into students&#x2019; penetrative thinking ability to find out whether it plays a role in knowledge gain in the different conditions. In the next section, we discuss our findings regarding students with lower penetrative thinking ability.</p>
</sec>
<sec id="s5-3">
<title>5.3 The Overall Effect of Bodily Engagement on Students With Lower Penetrative Thinking Ability</title>
<p>
<xref ref-type="bibr" rid="B64">Weise et&#x20;al. (2019)</xref> advise that the characteristics of the users should be considered when choosing an interaction technique; they suggest that users&#x2019; abilities can affect the performance and usability of the interaction technique. In this study, we used the Geologic Block Cross-sectioning Test (GBCT) to evaluate students&#x2019; penetrative thinking ability and assessed whether this ability might affect their performance with either interaction technique. Regardless of the condition, we observed that the higher the penetrative thinking ability of the students, the higher the knowledge gain. We hypothesized that students with higher spatial ability would better understand the spatial relations of 3D objects and would perform better in either condition. One goal of designing interactive and embodied VLEs in 3D is to help students with lower spatial abilities visualize data in 3D and better understand the spatial relations between 3D objects. We found that students with lower penetrative thinking ability benefited more from the interaction of the first-person condition: they had a significantly higher knowledge gain than students with lower penetrative thinking ability in the drag and scroll condition. This result indicates that students with lower penetrative thinking ability benefit from active movement in space that facilitates adjusting their viewpoints. In other words, manipulating objects and rotating them to get the desired viewpoint might be more complex for students with lower penetrative thinking ability than naturally moving in space. Even though students with lower penetrative thinking ability performed significantly better in the first-person condition in terms of knowledge gain, students with low penetrative thinking ability in the drag and scroll condition still improved significantly over their pre-test Z-scores. This suggests that even though the drag and scroll condition is not as ideal as the first-person condition for these students in terms of post-study knowledge gain, being exposed to a 3D representation of the data and interacting with it improves their penetrative view and results in a better understanding of the locations and depths of earthquakes.</p>
</sec>
</sec>
<sec id="s6">
<title>6 Conclusion, Limitations, and Future Work</title>
<p>In this article, we explored students&#x2019; penetrative thinking ability to interpret subduction zone plate tectonics from observations of the locations and depths of earthquakes. We argued that embodied learning could promote students&#x2019; learning experience and performance in visual-spatial thinking tasks such as penetrative thinking. To examine the role of bodily engagement as an embodied affordance on students&#x2019; learning experience and performance in an introductory physical geology course, we designed two VLEs based on two different interaction techniques: 1) object manipulation (drag and scroll) and 2) moving the user in space (first-person). Analyses of the data concerning learning experience and performance provided insights into students&#x2019; perception of learning and their actual performance. Overall, we argue that both interaction techniques have pros and cons regarding learning experience and performance. The goal of the VLE and the students&#x2019; spatial ability can further define which condition is the more suitable choice for teaching earthquake locations and depths.</p>
<p>One limitation of this study is the gender composition, consisting primarily of male participants. Although our focus has not been on gender differences in spatial abilities, we are aware that there are conflicting studies regarding differences in spatial abilities between male and female participants (<xref ref-type="bibr" rid="B69">Yuan et&#x20;al., 2019</xref>). Unfortunately, most studies focusing on spatial abilities compare the performance of male and female participants and do not include non-binary participants. Another limitation is that although our population comes from different fields and backgrounds, it was examined only in the context of geosciences. In future studies, we plan to investigate the role of bodily engagement in other courses concerning visual-spatial learning. Furthermore, to measure the effect of bodily engagement on knowledge retention, we would ideally have delayed the post-study knowledge test by a couple of hours to a day. However, due to time constraints during data collection, we could only delay it by approximately 15&#xa0;min, by placing the post-study knowledge test at the end of the post-study survey. A final limitation of this research pertains to the setup of the experiment. Like most research in this domain, our conclusions are based on a single exposure to the VLE. In a longitudinal study with multiple exposures, the observed effects of bodily engagement between the conditions could be either amplified or diminished. Therefore, in future research, we will perform a longitudinal study over several weeks to further explore the lasting effect of different interaction techniques.</p>
<p>Due to the COVID-19 pandemic, we could not compare the effects of different mediums (IVR vs web-based desktop VR) on bodily engagement and embodied learning. Therefore, as part of future work, we are devising methods for sending Oculus Quest headsets to students for remote VR data collection. We aim to investigate the effect of a higher level of bodily engagement in IVR on learning. Furthermore, although we designed this experiment with the utmost care, we plan to implement improvements in future studies. For instance, the VLEs did not include audio feedback for gaining information on earthquake depth or types of volcanoes. This proved to be a sought-after feature among the students and, as such, will be included in future versions of the tool. Future studies will also aim to understand why students reported the drag and scroll condition to be easier to use. We hypothesize that familiarity with this method of interaction, due to prior experience with geological software, might be a key predictor. However, it is also pertinent to investigate whether the use of Quest controllers for object manipulation in immersive VR, while physically walking in the environment, is considered easier than object manipulation using a drag and scroll technique (web-based desktop VR). Furthermore, comparing IVR with web-based desktop VR, we plan to investigate the level of control experienced by students in each condition to explore how much sense of agency they experience.</p>
</sec>
</body>
<back>
<sec id="s7">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s8">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by the IRB Program (Office of Research Protection) at Penn State University (Study ID: STUDY00008293). The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="s9">
<title>Author Contributions</title>
<p>MB: first author; conducted the research, designed the VR experiment and the survey, carried out the analysis, and contributed to writing. PS: designed the VR experiment, carried out the qualitative analysis, and contributed to writing. JW: research design, quantitative analysis, and writing. PLF: advisor on research design and implementation, writing. AK: advisor on research design and implementation, writing.</p>
</sec>
<sec id="s10">
<title>Funding</title>
<p>This work was supported through a Penn State Strategic Planning award (proposal &#x0023;1685_TE_Cycle2).</p>
</sec>
<sec sec-type="COI-statement" id="s11">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s12">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec id="s13">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/frvir.2021.695312/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/frvir.2021.695312/full&#x23;supplementary-material</ext-link>
</p>
<supplementary-material xlink:href="Video1.mp4" id="SM1" mimetype="application/mp4" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<fn-group>
<fn id="FN1">
<label>1</label>
<p>
<ext-link ext-link-type="uri" xlink:href="https://sagroups.ieee.org/icicle/">https://sagroups.ieee.org/icicle/</ext-link>
</p>
</fn>
<fn id="FN2">
<label>2</label>
<p>
<ext-link ext-link-type="uri" xlink:href="https://immersivelrn.org/about-us/what-is-ilrn/">https://immersivelrn.org/about-us/what-is-ilrn/</ext-link>
</p>
</fn>
<fn id="FN3">
<label>3</label>
<p>Plate boundaries are the edges of plates created when the lithosphere is broken into multiple pieces (<xref ref-type="bibr" rid="B60">Tarbuck et&#x20;al., 1997</xref>).</p>
</fn>
<fn id="FN4">
<label>4</label>
<p>
<ext-link ext-link-type="uri" xlink:href="https://desktop.arcgis.com/en/arcmap/latest/extensions/3d-analyst/choosing-the-3d-display-environment.htm">https://desktop.arcgis.com/en/arcmap/latest/extensions/3d-analyst/choosing-the-3d-display-environment.htm</ext-link>
</p>
</fn>
<fn id="FN5">
<label>5</label>
<p>Please refer to the video of the interaction techniques provided as the Supplementary Material.</p>
</fn>
<fn id="FN6">
<label>6</label>
<p>
<ext-link ext-link-type="uri" xlink:href="https://www.esri.com/en-us/arcgis/products/arcgis-pro/resources">https://www.esri.com/en-us/arcgis/products/arcgis-pro/resources</ext-link>
</p>
</fn>
<fn id="FN7">
<label>7</label>
<p>
<ext-link ext-link-type="uri" xlink:href="https://github.com/domlysz/BlenderGIS">https://github.com/domlysz/BlenderGIS</ext-link>
</p>
</fn>
<fn id="FN8">
<label>8</label>
<p>
<ext-link ext-link-type="uri" xlink:href="https://unity.com">https://unity.com</ext-link>
</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ai-Lim Lee</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>K. W.</given-names>
</name>
<name>
<surname>Fung</surname>
<given-names>C. C.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>How Does Desktop Virtual Reality Enhance Learning Outcomes? a Structural Equation Modeling Approach</article-title>. <source>Comput. Edu.</source> <volume>55</volume>, <fpage>1424</fpage>&#x2013;<lpage>1442</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2010.06.006</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bagher</surname>
<given-names>M. M.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Immersive VR and Embodied Learning: The Role of Embodied Affordances in the Long-Term Retention of Semantic Knowledge</article-title>,&#x201d; in <conf-name>2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>537</fpage>&#x2013;<lpage>538</lpage>. <pub-id pub-id-type="doi">10.1109/vrw50115.2020.00120</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bagher</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Sajjadi</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Carr</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>La Femina</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Klippel</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Fostering Penetrative Thinking in Geosciences through Immersive Experiences: A Case Study in Visualizing Earthquake Locations in 3D</article-title>,&#x201d; in <conf-name>2020 6th International Conference of the Immersive Learning Research Network (iLRN)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>132</fpage>&#x2013;<lpage>139</lpage>. <pub-id pub-id-type="doi">10.23919/ilrn47897.2020.9155123</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bailey</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bailenson</surname>
<given-names>J.&#x20;N.</given-names>
</name>
<name>
<surname>Won</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Flora</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Armel</surname>
<given-names>K. C.</given-names>
</name>
</person-group> (<year>2012</year>). &#x201c;<article-title>Presence and Memory: Immersive Virtual Reality Effects on Cued Recall</article-title>,&#x201d; in <conf-name>Proceedings of the International Society for Presence Research Annual Conference (Citeseer)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>24</fpage>&#x2013;<lpage>26</lpage>. </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barsalou</surname>
<given-names>L. W.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Grounded Cognition</article-title>. <source>Annu. Rev. Psychol.</source> <volume>59</volume>, <fpage>617</fpage>&#x2013;<lpage>645</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.59.103006.093639</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barsalou</surname>
<given-names>L. W.</given-names>
</name>
</person-group> (<year>1999</year>). <article-title>Perceptual Symbol Systems</article-title>. <source>Behav. Brain Sci.</source> <volume>22</volume>, <fpage>577</fpage>&#x2013;<lpage>660</lpage>. <pub-id pub-id-type="doi">10.1017/s0140525x99002149</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Biocca</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>1999</year>). <article-title>The Cyborg&#x27;s Dilemma</article-title>. <source>Hum. Factors Inf. Tech.</source> <volume>13</volume>, <fpage>113</fpage>&#x2013;<lpage>144</lpage>. <pub-id pub-id-type="doi">10.1016/s0923-8433(99)80011-2</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bulu</surname>
<given-names>S. T.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Place Presence, Social Presence, Co-presence, and Satisfaction in Virtual Worlds</article-title>. <source>Comput. Edu.</source> <volume>58</volume>, <fpage>154</fpage>&#x2013;<lpage>161</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2011.08.024</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Clifton</surname>
<given-names>P. G.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>J.&#x20;S.</given-names>
</name>
<name>
<surname>Yeboah</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Doucette</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Chandrasekharan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Nitsche</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2016</year>). <article-title>Design of Embodied Interfaces for Engaging Spatial Cognition</article-title>. <source>Cogn. Res. Princ Implic.</source> <volume>1</volume>, <fpage>24</fpage>&#x2013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1186/s41235-016-0032-5</pub-id> </citation>
</ref>
<ref id="B10">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Coffin</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Gahagan</surname>
<given-names>L. M.</given-names>
</name>
<name>
<surname>Lawver</surname>
<given-names>L. A.</given-names>
</name>
</person-group> (<year>1997</year>). <source>Present-day Plate Boundary Digital Data Compilation</source>. <publisher-loc>Austin, TX</publisher-loc>: <publisher-name>Tech. rep., Institute for Geophysics</publisher-name>. </citation>
</ref>
<ref id="B11">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Community</surname>
<given-names>B. O.</given-names>
</name>
</person-group> (<year>2018</year>). <source>Blender - a 3D Modelling and Rendering Package</source>. <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>Blender Foundation, Stichting Blender Foundation</publisher-name>. </citation>
</ref>
<ref id="B12">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Czerwinski</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Goodell</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sottilare</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Wagner</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Learning Engineering @ Scale</article-title>,&#x201d; in <conf-name>Proceedings of the Seventh ACM Conference on Learning @ Scale</conf-name> (<publisher-name>ACM</publisher-name>), <fpage>221</fpage>&#x2013;<lpage>223</lpage>. <pub-id pub-id-type="doi">10.1145/3386527.3405934</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dalgarno</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Carlson</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Gregory</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tynan</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>An Australian and New Zealand Scoping Study on the Use of 3D Immersive Virtual Worlds in Higher Education</article-title>. <source>Australas. J.&#x20;Educ. Tech.</source> <volume>27</volume>, <fpage>1</fpage>&#x2013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.14742/ajet.978</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dalgarno</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>M. J.&#x20;W.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>What Are the Learning Affordances of 3-D Virtual Environments?</article-title> <source>Br. J.&#x20;Educ. Tech.</source> <volume>41</volume>, <fpage>10</fpage>&#x2013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-8535.2009.01038.x</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Di Luca</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Seifi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Egan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gonzalez-Franco</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Locomotion Vault: The Extra Mile in Analyzing VR Locomotion Techniques</article-title>,&#x201d; in <conf-name>Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems</conf-name> (<publisher-name>ACM</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1145/3411764.3445319</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gonzalez-Franco</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Peck</surname>
<given-names>T. C.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Avatar Embodiment. Towards a Standardized Questionnaire</article-title>. <source>Front. Robot. AI</source> <volume>5</volume>, <fpage>74</fpage>. <pub-id pub-id-type="doi">10.3389/frobt.2018.00074</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hannula</surname>
<given-names>K. A.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Do Geology Field Courses Improve Penetrative Thinking?</article-title>
<source>J.&#x20;Geosci. Edu.</source> <volume>67</volume>, <fpage>143</fpage>&#x2013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1080/10899995.2018.1548004</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Hedges</surname>
<given-names>L. V.</given-names>
</name>
<name>
<surname>Olkin</surname>
<given-names>I.</given-names>
</name>
</person-group> (<year>2014</year>). <source>Statistical Methods for Meta-Analysis</source>. <publisher-name>Academic Press</publisher-name>. </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hegarty</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Montello</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Richardson</surname>
<given-names>A. E.</given-names>
</name>
<name>
<surname>Ishikawa</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Lovelace</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2006</year>). <article-title>Spatial Abilities at Different Scales: Individual Differences in Aptitude-Test Performance and Spatial-Layout Learning</article-title>. <source>Intelligence</source> <volume>34</volume>, <fpage>151</fpage>&#x2013;<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1016/j.intell.2005.09.005</pub-id> </citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hostetter</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Alibali</surname>
<given-names>M. W.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Visible Embodiment: Gestures as Simulated Action</article-title>. <source>Psychon. Bull. Rev.</source> <volume>15</volume>, <fpage>495</fpage>&#x2013;<lpage>514</lpage>. <pub-id pub-id-type="doi">10.3758/pbr.15.3.495</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Jerald</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2016</year>). <source>The VR Book: Human-Centered Design for Virtual Reality</source>. <publisher-loc>Williston, VT</publisher-loc>: <publisher-name>Morgan &#x26; Claypool</publisher-name>. </citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson-Glenberg</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Bartolomea</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Kalina</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Platform Is Not Destiny: Embodied Learning Effects Comparing 2D Desktop to 3D Virtual Reality STEM Experiences</article-title>. <source>J.&#x20;Comp. Assist. Learn.</source> <volume>37</volume>, <fpage>1263</fpage>&#x2013;<lpage>1284</lpage>. <pub-id pub-id-type="doi">10.1111/jcal.12567</pub-id> </citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson-Glenberg</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Birchfield</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Tolentino</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Koziupa</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Collaborative Embodied Learning in Mixed Reality Motion-Capture Environments: Two Science Studies</article-title>. <source>J.&#x20;Educ. Psychol.</source> <volume>106</volume>, <fpage>86</fpage>&#x2013;<lpage>104</lpage>. <pub-id pub-id-type="doi">10.1037/a0034008</pub-id> </citation>
</ref>
<ref id="B24">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson-Glenberg</surname>
<given-names>M. C.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Immersive VR and Education: Embodied Design Principles that Include Gesture and Hand Controls</article-title>. <source>Front. Robot. AI</source> <volume>5</volume>, <fpage>1</fpage>&#x2013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.3389/frobt.2018.00081</pub-id> </citation>
</ref>
<ref id="B25">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Johnson-Glenberg</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Ly</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zavala</surname>
<given-names>R. N.</given-names>
</name>
<name>
<surname>Bartolomeo</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Kalina</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Embodied Agentic STEM Education: Effects of 3D VR Compared to 2D PC</article-title>,&#x201d; in <conf-name>2020 6th International Conference of the Immersive Learning Research Network (iLRN)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>24</fpage>&#x2013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.23919/ilrn47897.2020.9155155</pub-id> </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson-Glenberg</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Megowan-Romanowicz</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Birchfield</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Savio-Ramos</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Effects of Embodied Learning and Digital Platform on the Retention of Physics Content: Centripetal Force</article-title>. <source>Front. Psychol.</source> <volume>7</volume>, <fpage>1819</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2016.01819</pub-id> </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kelly</surname>
<given-names>J.&#x20;W.</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>T. P.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Reference Frames during the Acquisition and Development of Spatial Memories</article-title>. <source>Cognition</source> <volume>116</volume>, <fpage>409</fpage>&#x2013;<lpage>420</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2010.06.002</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kelly</surname>
<given-names>J.&#x20;W.</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>T. P.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Spatial Memories of Virtual Environments: How Egocentric Experience, Intrinsic Structure, and Extrinsic Structure Interact</article-title>. <source>Psychon. Bull. Rev.</source> <volume>15</volume>, <fpage>322</fpage>&#x2013;<lpage>327</lpage>. <pub-id pub-id-type="doi">10.3758/PBR.15.2.322</pub-id> </citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kilteni</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Groten</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Slater</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>The Sense of Embodiment in Virtual Reality</article-title>. <source>Presence: Teleoperators and Virtual Environments</source> <volume>21</volume>, <fpage>373</fpage>&#x2013;<lpage>387</lpage>. <pub-id pub-id-type="doi">10.1162/pres_a_00124</pub-id> </citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klippel</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jackson</surname>
<given-names>K. L.</given-names>
</name>
<name>
<surname>La Femina</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Stubbs</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Wetzel</surname>
<given-names>R.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>Transforming Earth Science Education through Immersive Experiences: Delivering on a Long Held Promise</article-title>. <source>J.&#x20;Educ. Comput. Res.</source> <volume>57</volume>, <fpage>1745</fpage>&#x2013;<lpage>1771</lpage>. <pub-id pub-id-type="doi">10.1177/0735633119854025</pub-id> </citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lages</surname>
<given-names>W. S.</given-names>
</name>
<name>
<surname>Bowman</surname>
<given-names>D. A.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Move the Object or Move Myself? Walking vs. Manipulation for the Examination of 3D Scientific Data</article-title>. <source>Front. ICT</source> <volume>5</volume>, <fpage>15</fpage>. <pub-id pub-id-type="doi">10.3389/fict.2018.00015</pub-id> </citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>K. M.</given-names>
</name>
</person-group> (<year>2004</year>). <article-title>Presence, Explicated</article-title>. <source>Commun. Theor.</source> <volume>14</volume>, <fpage>27</fpage>&#x2013;<lpage>50</lpage>. <pub-id pub-id-type="doi">10.1111/j.1468-2885.2004.tb00302.x</pub-id> </citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Legault</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chi</surname>
<given-names>Y.-A.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Klippel</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Immersive Virtual Reality as an Effective Tool for Second Language Vocabulary Learning</article-title>. <source>Languages</source> <volume>4</volume>, <fpage>13</fpage>. <pub-id pub-id-type="doi">10.3390/languages4010013</pub-id> </citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lindgren</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Johnson-Glenberg</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Emboldened by Embodiment</article-title>. <source>Educ. Res.</source> <volume>42</volume>, <fpage>445</fpage>&#x2013;<lpage>452</lpage>. <pub-id pub-id-type="doi">10.3102/0013189x13511661</pub-id> </citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lindgren</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Tscholl</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Enhancing Learning and Engagement through Embodied Interaction within a Mixed Reality Simulation</article-title>. <source>Comput. Edu.</source> <volume>95</volume>, <fpage>174</fpage>&#x2013;<lpage>187</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2016.01.001</pub-id> </citation>
</ref>
<ref id="B36">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mathewson</surname>
<given-names>J.&#x20;H.</given-names>
</name>
</person-group> (<year>1999</year>). <article-title>Visual-spatial Thinking: An Aspect of Science Overlooked by Educators</article-title>. <source>Sci. Ed.</source> <volume>83</volume>, <fpage>33</fpage>&#x2013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1002/(sici)1098-237x(199901)83:1&#x3c;33:aid-sce2&#x3e;3.0.co;2-z</pub-id> </citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Merchant</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Goetz</surname>
<given-names>E. T.</given-names>
</name>
<name>
<surname>Cifuentes</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Keeney-Kennicutt</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>T. J.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Effectiveness of Virtual Reality-Based Instruction on Students&#x27; Learning Outcomes in K-12 and Higher Education: A Meta-Analysis</article-title>. <source>Comput. Edu.</source> <volume>70</volume>, <fpage>29</fpage>&#x2013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2013.07.033</pub-id> </citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mou</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>T. P.</given-names>
</name>
</person-group> (<year>2002</year>). <article-title>Intrinsic Frames of Reference in Spatial Memory</article-title>. <source>J.&#x20;Exp. Psychol. Learn. Mem. Cogn.</source> <volume>28</volume>, <fpage>162</fpage>&#x2013;<lpage>170</lpage>. <pub-id pub-id-type="doi">10.1037/0278-7393.28.1.162</pub-id> </citation>
</ref>
<ref id="B39">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Newcombe</surname>
<given-names>N. S.</given-names>
</name>
<name>
<surname>Shipley</surname>
<given-names>T. F.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Thinking about Spatial Thinking: New Typology, New Assessments</article-title>,&#x201d; in <source>Studying Visual and Spatial Reasoning for Design Creativity</source> (<publisher-name>Springer</publisher-name>), <fpage>179</fpage>&#x2013;<lpage>192</lpage>. <pub-id pub-id-type="doi">10.1007/978-94-017-9297-4_10</pub-id> </citation>
</ref>
<ref id="B40">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nowak</surname>
<given-names>K. L.</given-names>
</name>
<name>
<surname>Biocca</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2003</year>). <article-title>The Effect of the Agency and Anthropomorphism on Users&#x27; Sense of Telepresence, Copresence, and Social Presence in Virtual Environments</article-title>. <source>Presence: Teleoperators &#x26; Virtual Environments</source> <volume>12</volume>, <fpage>481</fpage>&#x2013;<lpage>494</lpage>. <pub-id pub-id-type="doi">10.1162/105474603322761289</pub-id> </citation>
</ref>
<ref id="B41">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ormand</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Manduca</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Shipley</surname>
<given-names>T. F.</given-names>
</name>
<name>
<surname>Tikoff</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Harwood</surname>
<given-names>C. L.</given-names>
</name>
<name>
<surname>Atit</surname>
<given-names>K.</given-names>
</name>
<etal/>
</person-group> (<year>2014</year>). <article-title>Evaluating Geoscience Students&#x27; Spatial Thinking Skills in a Multi-Institutional Classroom Study</article-title>. <source>J.&#x20;Geosci. Edu.</source> <volume>62</volume>, <fpage>146</fpage>&#x2013;<lpage>154</lpage>. <pub-id pub-id-type="doi">10.5408/13-027.1</pub-id> </citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Paas</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Sweller</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>An Evolutionary Upgrade of Cognitive Load Theory: Using the Human Motor System and Collaboration to Support the Learning of Complex Cognitive Tasks</article-title>. <source>Educ. Psychol. Rev.</source> <volume>24</volume>, <fpage>27</fpage>&#x2013;<lpage>45</lpage>. <pub-id pub-id-type="doi">10.1007/s10648-011-9179-2</pub-id> </citation>
</ref>
<ref id="B43">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Plummer</surname>
<given-names>J.&#x20;D.</given-names>
</name>
<name>
<surname>Bower</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Liben</surname>
<given-names>L. S.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>The Role of Perspective Taking in How Children Connect Reference Frames when Explaining Astronomical Phenomena</article-title>. <source>Int. J.&#x20;Sci. Edu.</source> <volume>38</volume>, <fpage>345</fpage>&#x2013;<lpage>365</lpage>. <pub-id pub-id-type="doi">10.1080/09500693.2016.1140921</pub-id> </citation>
</ref>
<ref id="B44">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Repetto</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Serino</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Macedonia</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Riva</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Virtual Reality as an Embodied Tool to Enhance Episodic Memory in Elderly</article-title>. <source>Front. Psychol.</source> <volume>7</volume>, <fpage>1839</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2016.01839</pub-id> </citation>
</ref>
<ref id="B45">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ritzwoller</surname>
<given-names>M. H.</given-names>
</name>
<name>
<surname>Barmin</surname>
<given-names>M. P.</given-names>
</name>
<name>
<surname>Villase&#xf1;or</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Levshin</surname>
<given-names>A. L.</given-names>
</name>
<name>
<surname>Engdahl</surname>
<given-names>E. R.</given-names>
</name>
</person-group> (<year>2002</year>). <article-title>Pn and Sn Tomography across Eurasia to Improve Regional Seismic Event Locations</article-title>. <source>Tectonophysics</source> <volume>358</volume>, <fpage>39</fpage>&#x2013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1016/s0040-1951(02)00416-x</pub-id> </citation>
</ref>
<ref id="B46">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Ruscella</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Obeid</surname>
<given-names>M. F.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>A Taxonomy for Immersive Experience Design</article-title>,&#x201d; in <conf-name>2021 7th International Conference of the Immersive Learning Research Network (iLRN)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.23919/ilrn52045.2021.9459328</pub-id> </citation>
</ref>
<ref id="B47">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sanchez-Vives</surname>
<given-names>M. V.</given-names>
</name>
<name>
<surname>Slater</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2005</year>). <article-title>From Presence to Consciousness through Virtual Reality</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>6</volume>, <fpage>332</fpage>&#x2013;<lpage>339</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1651</pub-id> </citation>
</ref>
<ref id="B48">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Schreier</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2012</year>). <source>Qualitative Content Analysis in Practice</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Sage publications</publisher-name>. </citation>
</ref>
<ref id="B49">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schubert</surname>
<given-names>T. W.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>A New Conception of Spatial Presence: Once Again, with Feeling</article-title>. <source>Commun. Theor.</source> <volume>19</volume>, <fpage>161</fpage>&#x2013;<lpage>187</lpage>. <pub-id pub-id-type="doi">10.1111/j.1468-2885.2009.01340.x</pub-id> </citation>
</ref>
<ref id="B50">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schuemie</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Van Der Straaten</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Krijn</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Van Der Mast</surname>
<given-names>C. A. P. G.</given-names>
</name>
</person-group> (<year>2001</year>). <article-title>Research on Presence in Virtual Reality: A Survey</article-title>. <source>CyberPsychology Behav.</source> <volume>4</volume>, <fpage>183</fpage>&#x2013;<lpage>201</lpage>. <pub-id pub-id-type="doi">10.1089/109493101300117884</pub-id> </citation>
</ref>
<ref id="B51">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Shapiro</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2014</year>). <source>The Routledge Handbook of Embodied Cognition</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Routledge</publisher-name>. </citation>
</ref>
<ref id="B52">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shapiro</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>The Embodied Cognition Research Programme</article-title>. <source>Philos. Compass</source> <volume>2</volume>, <fpage>338</fpage>&#x2013;<lpage>346</lpage>. <pub-id pub-id-type="doi">10.1111/j.1747-9991.2007.00064.x</pub-id> </citation>
</ref>
<ref id="B53">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Skulmowski</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Rey</surname>
<given-names>G. D.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Embodied Learning: Introducing a Taxonomy Based on Bodily Engagement and Task Integration</article-title>. <source>Cogn. Res.</source> <volume>3</volume>, <fpage>6</fpage>. <pub-id pub-id-type="doi">10.1186/s41235-018-0092-9</pub-id> </citation>
</ref>
<ref id="B54">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slater</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments</article-title>. <source>Phil. Trans. R. Soc. B</source> <volume>364</volume>, <fpage>3549</fpage>&#x2013;<lpage>3557</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2009.0138</pub-id> </citation>
</ref>
<ref id="B55">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slater</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Spanlang</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Corominas</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Simulating Virtual Environments within Virtual Environments as the Basis for a Psychophysics of Presence</article-title>. <source>ACM Trans. Graph.</source> <volume>29</volume>, <fpage>1</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1145/1778765.1778829</pub-id> </citation>
</ref>
<ref id="B56">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slater</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Wilbur</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>1997</year>). <article-title>A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments</article-title>. <source>Presence: Teleoperators &#x26; Virtual Environments</source> <volume>6</volume>, <fpage>603</fpage>&#x2013;<lpage>616</lpage>. <pub-id pub-id-type="doi">10.1162/pres.1997.6.6.603</pub-id> </citation>
</ref>
<ref id="B57">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smyrnaiou</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Sotiriou</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Georgakopoulou</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Papadopoulou</surname>
<given-names>O.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Connecting Embodied Learning in Educational Practice to the Realisation of Science Educational Scenarios through Performing Arts</article-title>. <source>Inspiring Sci. Educ.</source> <volume>31</volume>, <fpage>31</fpage>&#x2013;<lpage>38</lpage>. </citation>
</ref>
<ref id="B58">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Southgate</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Conceptualising Embodiment through Virtual Reality for Education</article-title>,&#x201d; in <conf-name>2020 6th International Conference of the Immersive Learning Research Network (iLRN)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>38</fpage>&#x2013;<lpage>45</lpage>. <pub-id pub-id-type="doi">10.23919/ilrn47897.2020.9155121</pub-id> </citation>
</ref>
<ref id="B59">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stolz</surname>
<given-names>S. A.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Embodied Learning</article-title>. <source>Educ. Philos. Theor.</source> <volume>47</volume>, <fpage>474</fpage>&#x2013;<lpage>487</lpage>. <pub-id pub-id-type="doi">10.1080/00131857.2013.879694</pub-id> </citation>
</ref>
<ref id="B60">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Tarbuck</surname>
<given-names>E. J.</given-names>
</name>
<name>
<surname>Lutgens</surname>
<given-names>F. K.</given-names>
</name>
<name>
<surname>Tasa</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>1997</year>). <source>Earth Science</source>. <publisher-loc>New Jersey</publisher-loc>: <publisher-name>Prentice-Hall</publisher-name>. </citation>
</ref>
<ref id="B61">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Venzke</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Global Volcanism Program</article-title>. <source>Volcanoes of the World</source> <volume>4</volume>, <fpage>1</fpage>. </citation>
</ref>
<ref id="B62">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Vorderer</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Wirth</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Gouveia</surname>
<given-names>F. R.</given-names>
</name>
<name>
<surname>Biocca</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Saari</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>J&#xe4;ncke</surname>
<given-names>L.</given-names>
</name>
<etal/>
</person-group> (<year>2004</year>). <article-title>MEC Spatial Presence Questionnaire</article-title>. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="http://www.ijk.hmt-hannover.de/presence">http://www.ijk.hmt-hannover.de/presence</ext-link> (Accessed September 18, 2015)</comment>. </citation>
</ref>
<ref id="B63">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Waller</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Greenauer</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>The Role of Body-Based Sensory Information in the Acquisition of Enduring Spatial Representations</article-title>. <source>Psychol. Res.</source> <volume>71</volume>, <fpage>322</fpage>&#x2013;<lpage>332</lpage>. <pub-id pub-id-type="doi">10.1007/s00426-006-0087-x</pub-id> </citation>
</ref>
<ref id="B64">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Weise</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zender</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lucke</surname>
<given-names>U.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>A Comprehensive Classification of 3D Selection and Manipulation Techniques</article-title>,&#x201d; in <conf-name>Proceedings of Mensch und Computer</conf-name> (<publisher-name>ACM</publisher-name>), <fpage>321</fpage>&#x2013;<lpage>332</lpage>. <pub-id pub-id-type="doi">10.1145/3340764.3340777</pub-id> </citation>
</ref>
<ref id="B65">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilson</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2002</year>). <article-title>Six Views of Embodied Cognition</article-title>. <source>Psychon. Bull. Rev.</source> <volume>9</volume>, <fpage>625</fpage>&#x2013;<lpage>636</lpage>. <pub-id pub-id-type="doi">10.3758/BF03196322</pub-id> </citation>
</ref>
<ref id="B66">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wirth</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Hartmann</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>B&#xf6;cking</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Vorderer</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Klimmt</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Schramm</surname>
<given-names>H.</given-names>
</name>
<etal/>
</person-group> (<year>2007</year>). <article-title>A Process Model of the Formation of Spatial Presence Experiences</article-title>. <source>Media Psychol.</source> <volume>9</volume>, <fpage>493</fpage>&#x2013;<lpage>525</lpage>. <pub-id pub-id-type="doi">10.1080/15213260701283079</pub-id> </citation>
</ref>
<ref id="B67">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Witmer</surname>
<given-names>B. G.</given-names>
</name>
<name>
<surname>Singer</surname>
<given-names>M. J.</given-names>
</name>
</person-group> (<year>1998</year>). <article-title>Measuring Presence in Virtual Environments: A Presence Questionnaire</article-title>. <source>Presence</source> <volume>7</volume>, <fpage>225</fpage>&#x2013;<lpage>240</lpage>. <pub-id pub-id-type="doi">10.1162/105474698565686</pub-id> </citation>
</ref>
<ref id="B68">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Cho</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2018</year>). <source>How Spatial Presence in VR Affects Memory Retention and Motivation on Second Language Learning: A Comparison of Desktop and Immersive VR-Based Learning</source>. <publisher-loc>Syracuse, NY</publisher-loc>: <publisher-name>Syracuse University</publisher-name>. <comment>Ph.D. thesis</comment>. </citation>
</ref>
<ref id="B69">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yuan</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Kong</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zeng</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>You</surname>
<given-names>X.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Gender Differences in Large-Scale and Small-Scale Spatial Ability: A Systematic Review Based on Behavioral and Neuroimaging Research</article-title>. <source>Front. Behav. Neurosci.</source> <volume>13</volume>, <fpage>128</fpage>. <pub-id pub-id-type="doi">10.3389/fnbeh.2019.00128</pub-id> </citation>
</ref>
<ref id="B70">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zielasko</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Horn</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Freitag</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Weyers</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Kuhlen</surname>
<given-names>T. W.</given-names>
</name>
</person-group> (<year>2016</year>). &#x201c;<article-title>Evaluation of Hands-free HMD-Based Navigation Techniques for Immersive Data Analysis</article-title>,&#x201d; in <conf-name>2016 IEEE Symposium on 3D User Interfaces (3DUI)</conf-name> (<publisher-name>IEEE</publisher-name>), <fpage>113</fpage>&#x2013;<lpage>119</lpage>. <pub-id pub-id-type="doi">10.1109/3dui.2016.7460040</pub-id> </citation>
</ref>
<ref id="B71">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zielasko</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Riecke</surname>
<given-names>B. E.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>To Sit or Not to Sit in VR: Analyzing Influences and (Dis)Advantages of Posture and Embodied Interaction</article-title>. <source>Computers</source> <volume>10</volume>, <fpage>73</fpage>. <pub-id pub-id-type="doi">10.3390/computers10060073</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>