<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Virtual Real.</journal-id>
<journal-title>Frontiers in Virtual Reality</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Virtual Real.</abbrev-journal-title>
<issn pub-type="epub">2673-4192</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1379351</article-id>
<article-id pub-id-type="doi">10.3389/frvir.2024.1379351</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Virtual Reality</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Synergy and medial effects of multimodal cueing with auditory and electrostatic force stimuli on visual field guidance in 360&#xb0; VR</article-title>
<alt-title alt-title-type="left-running-head">Sawahata et al.</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frvir.2024.1379351">10.3389/frvir.2024.1379351</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Sawahata</surname>
<given-names>Yasuhito</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/247120/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Harasawa</surname>
<given-names>Masamitsu</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/2749457/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Komine</surname>
<given-names>Kazuteru</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/281795/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff>
<institution>Science and Technology Research Laboratories</institution>, <institution>Japan Broadcasting Corporation</institution>, <addr-line>Tokyo</addr-line>, <country>Japan</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/530974/overview">Justyna &#x15a;widrak</ext-link>, August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Spain</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1720262/overview">Alejandro Beacco</ext-link>, University of Barcelona, Spain</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/369014/overview">Pierre Bourdin-Kreitz</ext-link>, Open University of Catalonia, Spain</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Yasuhito Sawahata, <email>sawahata.y-jq@nhk.or.jp</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>06</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>5</volume>
<elocation-id>1379351</elocation-id>
<history>
<date date-type="received">
<day>31</day>
<month>01</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>05</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2024 Sawahata, Harasawa and Komine.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Sawahata, Harasawa and Komine</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>This study investigates the effects of multimodal cues on visual field guidance in 360&#xb0; virtual reality (VR). Although this technology provides highly immersive visual experiences through spontaneous viewing, this capability can disrupt the quality of experience and cause users to miss important objects or scenes. Multimodal cueing using non-visual stimuli to guide the users&#x2019; heading, or their visual field, has the potential to preserve the spontaneous viewing experience without interfering with the original content. In this study, we present a visual field guidance method that delivers auditory and haptic stimuli using an artificial electrostatic force that can induce a subtle &#x201c;fluffy&#x201d; sensation on the skin. We conducted a visual search experiment in VR, wherein the participants attempted to find visual target stimuli both with and without multimodal cues, to investigate the behavioral characteristics produced by the guidance method. The results showed that the cues aided the participants in locating the target stimuli. However, the performance with simultaneous auditory and electrostatic cues was situated between those obtained when each cue was presented individually (<italic>medial effect</italic>), and no improvement was observed even when multiple cue stimuli pointed to the same target. In addition, a simulation analysis showed that this intermediate performance can be explained by the integrated perception model; that is, it is caused by an imbalanced perceptual uncertainty in each sensory cue for orienting to the correct view direction. The simulation analysis also showed that an improved performance (<italic>synergy effect</italic>) can be observed depending on the balance of the uncertainty, suggesting that the relative amount of uncertainty for each cue determines the performance. These results suggest that electrostatic force can be used to guide 360&#xb0; viewing in VR, and that the performance of visual field guidance can be improved by introducing multimodal cues, the uncertainty of which is modulated to be less than or comparable to that of other cues. Our findings on the conditions that modulate multimodal cueing effects contribute to maximizing the quality of spontaneous 360&#xb0; viewing experiences with multimodal guidance.</p>
</abstract>
<kwd-group>
<kwd>out-of-view problem</kwd>
<kwd>visual field guidance</kwd>
<kwd>electrostatic force</kwd>
<kwd>haptics</kwd>
<kwd>multimodal processing</kwd>
<kwd>integrated perception model</kwd>
</kwd-group>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Haptics</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>The presentation of multimodal sensory information in virtual reality (VR) can considerably enhance the sense of presence and immersion. In daily life, we perceive the surrounding physical world through multiple senses, such as visual, auditory, and haptic senses, and interact with it based on these perceptions (<xref ref-type="bibr" rid="B15">Gibson, 1979</xref>; <xref ref-type="bibr" rid="B12">Flach and Holden, 1998</xref>; <xref ref-type="bibr" rid="B7">Dalgarno and Lee, 2010</xref>). Therefore, introducing multimodal stimulations into VR can enhance realism and significantly improve the experience. In fact, numerous studies have reported the benefits of multimodal VR (<xref ref-type="bibr" rid="B34">Mikropoulos and Natsis, 2011</xref>; <xref ref-type="bibr" rid="B35">Murray et al., 2016</xref>; <xref ref-type="bibr" rid="B58">Wang et al., 2016</xref>; <xref ref-type="bibr" rid="B29">Martin et al., 2022</xref>; <xref ref-type="bibr" rid="B33">Melo et al., 2022</xref>).</p>
<p>Head-mounted displays (HMDs) offer a highly immersive visual experience by allowing users to spontaneously view a 360&#xb0; visual world; however, this feature may disrupt the 360&#xb0; viewing experience, causing users to miss important objects or scenes that are located outside their visual field, thereby resulting in the &#x201c;out-of-view&#x201d; problem (<xref ref-type="bibr" rid="B18">Gruenefeld, El Ali, et al., 2017b</xref>). In 360&#xb0; VR, the visual field of the user is defined by the viewport of the HMD. As nothing is presented outside the viewport, users cannot perceive these regions without changing their head direction. To address this problem, the presentation of arrows (<xref ref-type="bibr" rid="B25">Lin Y.-C et al., 2017</xref>; <xref ref-type="bibr" rid="B47">Schmitz et al., 2020</xref>; <xref ref-type="bibr" rid="B57">Wallgrun et al., 2020</xref>), peripheral flickering (<xref ref-type="bibr" rid="B47">Schmitz et al., 2020</xref>; <xref ref-type="bibr" rid="B57">Wallgrun et al., 2020</xref>), and picture-in-picture previews and thumbnails (<xref ref-type="bibr" rid="B26">Lin Y. T. et al., 2017</xref>; <xref ref-type="bibr" rid="B59">Yamaguchi et al., 2021</xref>) has been employed and shown to guide the gaze and visual attention effectively. However, these approaches inevitably interfere with the video content, potentially disrupting the spontaneous viewing experience and diminishing the benefit of 360&#xb0; video viewing (<xref ref-type="bibr" rid="B48">Sheikh et al., 2016</xref>; <xref ref-type="bibr" rid="B39">Pavel et al., 2017</xref>; <xref ref-type="bibr" rid="B54">Tong et al., 2019</xref>). Addressing this problem will significantly improve the 360&#xb0; video viewing experience, especially for VR content with fixed-time events, such as live scenes, movies, and dramas.</p>
<p>Several studies have explored the potential of multimodal stimuli to guide user behavior in 360&#xb0; VR while preserving the original content (<xref ref-type="bibr" rid="B44">Rothe et al., 2019</xref>; <xref ref-type="bibr" rid="B28">Malpica, Serrano, Allue, et al., 2020b</xref>). Diegetic cues based on non-visual sensory stimuli, such as directional sound emanating from a VR scene, provide natural and intuitive guidance that feels appropriate in VR settings (<xref ref-type="bibr" rid="B37">Nielsen et al., 2016</xref>; <xref ref-type="bibr" rid="B48">Sheikh et al., 2016</xref>; <xref ref-type="bibr" rid="B46">Rothe et al., 2017</xref>; <xref ref-type="bibr" rid="B45">Rothe and Hu&#xdf;mann, 2018</xref>; <xref ref-type="bibr" rid="B54">Tong et al., 2019</xref>), exhibiting good compatibility with immersive 360&#xb0; video viewing. At present, audio output is supported by virtually all available HMDs, making it the most common cue for visual field guidance. Because visual field guidance using non-visual stimuli is expected to provide high-quality VR experiences (<xref ref-type="bibr" rid="B44">Rothe et al., 2019</xref>), extensive research on various multimodal stimulation methods, including haptic stimulation, can aid the design of better VR experiences.</p>
<p>This study introduces electrostatic force stimuli to guide user behavior in selecting visual images that are displayed on the HMD (<xref ref-type="fig" rid="F1">Figure 1</xref>). Previous studies have shown that applying an electrostatic force to the human body can induce a &#x201c;fluffy&#x201d; haptic sensation (<xref ref-type="bibr" rid="B13">Fukushima and Kajimoto, 2012a</xref>; <xref ref-type="bibr" rid="B14">2012b</xref>; <xref ref-type="bibr" rid="B52">Suzuki et al., 2020</xref>; <xref ref-type="bibr" rid="B24">Karasawa and Kajimoto, 2021</xref>). Unlike some species of fish, amphibians, and mammals, humans do not possess electroreceptive abilities that allow them to perceive electric fields directly (<xref ref-type="bibr" rid="B40">Proske et al., 1998</xref>; <xref ref-type="bibr" rid="B36">Newton et al., 2019</xref>; <xref ref-type="bibr" rid="B22">H&#xfc;ttner et al., 2023</xref>). However, as discussed in <xref ref-type="bibr" rid="B24">Karasawa and Kajimoto (2021)</xref>, the haptic sensations that are produced through electrostatic stimulation are strongly related to the hair on the skin. Therefore, humans can indirectly perceive electrostatic stimulation through cutaneous mechanoreceptors (<xref ref-type="bibr" rid="B21">Horch et al., 1977</xref>; <xref ref-type="bibr" rid="B23">Johnson, 2001</xref>; <xref ref-type="bibr" rid="B60">Zimmerman et al., 2014</xref>), which are primarily stimulated by hair movements owing to electrostatic forces. Perceiving the physical world through cutaneous haptic sensations is a common experience in daily life, such as feeling the movement of air, and is expected to be a candidate method to guide user behavior naturally.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Visual field guidance in 360&#xb0; VR using electrostatic force stimuli to mitigate the out-of-view problem. <bold>(A)</bold> Gentle visual field guidance. A user is viewing the scene depicted in the orange frame, whereas an important situation exists in the scene depicted in the red frame. Guiding the visual field to the proper direction will improve the user experience. <bold>(B)</bold> Haptic stimulus presentation using electrostatic forces. Electrostatic force helps the user to discover the important scene without affecting the original 360&#xb0; VR content.</p>
</caption>
<graphic xlink:href="frvir-05-1379351-g001.tif"/>
</fig>
<p>Many studies have proposed various methods of providing haptic sensations for visual field guidance, such as vibrations (<xref ref-type="bibr" rid="B31">Matsuda et al., 2020</xref>), normal forces on the face (<xref ref-type="bibr" rid="B4">Chang et al., 2018</xref>), and muscle stimulation (<xref ref-type="bibr" rid="B53">Tanaka et al., 2022</xref>), demonstrating that multimodal stimulation can improve the VR experience. Electrostatic force stimulation also provides haptic sensations, but can stimulate a relatively large area of the human body in a &#x201c;fluffy&#x201d; and subtle manner, which differs significantly from stimuli produced by other tactile stimulation methods, such as direct vibration stimulation through actuators. <xref ref-type="bibr" rid="B24">Karasawa and Kajimoto (2021)</xref> showed that electrostatic force stimulation can provide a feeling of <italic>presence</italic>. Previously, <xref ref-type="bibr" rid="B49">Slater (2009)</xref> and <xref ref-type="bibr" rid="B50">Slater et al. (2022)</xref> provided two views of immersive VR experiences, namely, place illusion (PI) and plausibility illusion (Psi), which refer to the sensation of being in a real place and the illusion that the depicted scenario is actually occurring, respectively. In this sense, the effects of haptic stimulation on user experiences in VR belong to Psi. Such fluffy, subtle stimulation of the skin by electrostatic force has the potential to simulate the sensations of airflow, chills, and goosebumps, which are common daily-life experiences. The introduction of such modalities will enhance the plausibility of VR and lead to better VR experiences.</p>
<p>In this study, we presented electrostatic force stimuli using corona discharge, a phenomenon wherein ions are continuously emitted from a needle electrode at high voltages, allowing the provision of stimuli from a distance. Specifically, we placed the electrode above the user&#x2019;s head to stimulate a large area, from the head to the body (<xref ref-type="fig" rid="F1">Figure 1B</xref>). Previous studies have employed plate- or pole-shaped electrodes to present such stimuli (<xref ref-type="bibr" rid="B13">Fukushima and Kajimoto, 2012a</xref>; <xref ref-type="bibr" rid="B14">2012b</xref>; <xref ref-type="bibr" rid="B24">Karasawa and Kajimoto, 2021</xref>) and required the user to place their forearm close to the electrodes of the stimulation device, thereby limiting their body movement. The force becomes imperceptible when the body is located even 10&#xa0;cm from the electrode (<xref ref-type="bibr" rid="B24">Karasawa and Kajimoto, 2021</xref>). As a typical VR user moves more than this distance, these conventional methods are not suitable for VR applications that require physical movement. In addition, these devices are too bulky to be worn on the body. The proposed method can potentially overcome this distance limitation and provide haptic sensations to VR users from afar, thereby enabling the use of electrostatic force stimulation for visual field guidance in VR.</p>
<p>We evaluated the proposed visual field guidance method using multimodal cues in a psychophysical experiment. Previous studies have systematically evaluated visual field guidance using visual cues (<xref ref-type="bibr" rid="B17">Gruenefeld, Ennenga, et al., 2017a</xref>; <xref ref-type="bibr" rid="B18">Gruenefeld, El Ali, et al., 2017b</xref>; <xref ref-type="bibr" rid="B8">Danieau et al., 2017</xref>; <xref ref-type="bibr" rid="B16">Gruenefeld et al., 2018</xref>; <xref ref-type="bibr" rid="B19">2019</xref>; <xref ref-type="bibr" rid="B20">Harada and Ohyama, 2022</xref>) in VR versions of visual search experiments (<xref ref-type="bibr" rid="B55">Treisman and Gelade, 1980</xref>; <xref ref-type="bibr" rid="B32">McElree and Carrasco, 1999</xref>). This study similarly investigated the effects of multimodal cues on visual searching.</p>
<p>Although numerous studies have shown that multiple modalities in VR can significantly improve the immersive experience (<xref ref-type="bibr" rid="B41">Ranasinghe et al., 2017</xref>; <xref ref-type="bibr" rid="B42">2018</xref>; <xref ref-type="bibr" rid="B6">Cooper et al., 2018</xref>), it is unclear whether visual field guidance can also be improved by introducing multiple non-overt cues. We believe that multiple overt cues, such as visual arrows and halos, would help users to perform search tasks. However, this is not necessarily true for non-overt, subtle, and vague cues. Although guidance through subtle cues can minimize content intrusion (<xref ref-type="bibr" rid="B1">Bailey et al., 2009</xref>; <xref ref-type="bibr" rid="B37">Nielsen et al., 2016</xref>; <xref ref-type="bibr" rid="B48">Sheikh et al., 2016</xref>; <xref ref-type="bibr" rid="B3">Bala et al., 2019</xref>), it is not always guaranteed to be effective (<xref ref-type="bibr" rid="B43">Rothe et al., 2018</xref>). However, employing multiple subtle cues and integrating them into a coherent cue may provide effective overall guidance. In this study, in addition to electrostatic forces, we introduced weak auditory stimuli as subtle environmental cues to investigate the interaction effects of electrostatic and auditory cues on the guidance performance in VR as well as whether they improve, worsen, or have no effect on the guidance performance.</p>
<p>The nature of multimodal perception, which involves the integration of various sensory inputs to produce a coherent perception, has been understood using statistical models, such as maximum likelihood estimation and integration based on Bayes&#x2019; theorem (<xref ref-type="bibr" rid="B11">Ernst and Banks, 2002</xref>; <xref ref-type="bibr" rid="B9">Ernst, 2006</xref>; <xref ref-type="bibr" rid="B10">2007</xref>; <xref ref-type="bibr" rid="B51">Spence, 2011</xref>). Although such computational modeling approaches are also expected to aid in comprehending the underlying mechanisms of multimodal cueing effects on visual field guidance, to the best of our knowledge, this aspect remains unexplored. Therefore, we adopted a similar approach using computational models and investigated the effects of various cueing conditions on visual field guidance. Thus, this study offers a detailed understanding of multimodal visual field guidance and knowledge for predicting user behavior under various cue conditions.</p>
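As a concrete illustration of the maximum-likelihood view of cue integration cited above (Ernst and Banks, 2002), the following sketch combines two Gaussian cue estimates by weighting each with its inverse variance. This is our own illustration, not code from this study; the function name and variable names are hypothetical.

```python
import numpy as np

def mle_combine(mu_a, sigma_a, mu_b, sigma_b):
    """Maximum-likelihood integration of two Gaussian cues.

    Each cue reports an estimate mu (e.g., a target direction in degrees)
    with uncertainty sigma; the optimal combined estimate weights each
    cue by its reliability (inverse variance)."""
    var_a, var_b = sigma_a ** 2, sigma_b ** 2
    w_a = var_b / (var_a + var_b)          # the more reliable cue gets more weight
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = var_a * var_b / (var_a + var_b)  # never exceeds the smaller variance
    return mu, np.sqrt(var)
```

Under this model, two equally uncertain cues yield an estimate midway between them, whereas a much noisier cue is effectively ignored; this is the intuition behind the medial and masking effects examined later.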
<p>We first introduce electrostatic force and auditory stimuli as multimodal cues in a visual search task and then show that electrostatic force can potentially address the out-of-view problem. Because auditory stimuli have been commonly used in previous studies to guide user behavior (<xref ref-type="bibr" rid="B56">Walker and Lindsay, 2003</xref>; <xref ref-type="bibr" rid="B46">Rothe et al., 2017</xref>; <xref ref-type="bibr" rid="B2">Bala et al., 2018</xref>; <xref ref-type="bibr" rid="B27">Malpica, Serrano, Allue, et al., 2020a</xref>; <xref ref-type="bibr" rid="B28">Malpica, Serrano, Gutierrez, et al., 2020b</xref>; <xref ref-type="bibr" rid="B5">Chao et al., 2020</xref>; <xref ref-type="bibr" rid="B30">Masia et al., 2021</xref>), a baseline is provided for comparisons. In the visual search task, the participants were instructed to find a specific visual target as quickly as possible in 360&#xb0; VR, both with and without sensory cues. We anticipated that the cueing would reduce the cumulative travel angles associated with updating the head direction during the search. Therefore, a comparison of the task performances in each condition revealed the effect of multimodal cueing on the visual field guidance.</p>
<p>In this study, we hypothesized that performance with multimodal cueing in the visual search task in VR would show one of the following three effects: 1) a performance improvement over that with either the electrostatic force or auditory cue alone (<italic>synergy effect</italic>); 2) the same performance as with the better cue, regardless of the performance with the other cue (<italic>masking effect</italic>); and 3) a performance between the individual performances with each cue (<italic>medial effect</italic>). We conducted a psychophysical experiment to investigate which of these effects was observed with multimodal cues. Subsequently, through the psychophysical experiment and an additional simulation analysis, we demonstrated that both the <italic>synergy</italic> and <italic>medial effects</italic> can be observed depending on the balance of perceptual uncertainties for each cue and the variance in the selection of the head direction. Finally, we investigated the conditions for effective multimodal visual field guidance.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>2 Materials and methods</title>
<sec id="s2-1">
<title>2.1 Visual search experiment with multimodal cues</title>
<p>This subsection describes the experiment that was conducted to investigate the effects of visual field guidance on visual search performance in 360&#xb0; VR using haptic and auditory cues. In addition, the multimodal effects of simultaneous cueing using haptic and auditory stimuli were investigated. The search performance was measured based on the travel angles, which are the cumulative rotation angles of the head direction, as described in detail in <xref ref-type="sec" rid="s2-2-1-1">Section 2.2.1.1</xref>. Finally, we determined which of the effects, namely, <italic>synergy</italic>, <italic>medial</italic>, or <italic>masking</italic>, was most likely by comparing the travel angles obtained in each cue condition.</p>
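One way to compute the cumulative travel angle described above is to sum the rotation between successive head-direction samples. The sketch below is an illustrative implementation under our own assumptions (the function name and the sampled unit-vector input format are not from the study):

```python
import numpy as np

def travel_angle(head_dirs):
    """Cumulative travel angle (degrees) of the head direction.

    head_dirs: (N, 3) array of head-direction vectors sampled over time.
    The travel angle is the sum of the rotation angles between
    consecutive samples."""
    v = np.asarray(head_dirs, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)      # normalize each sample
    cos = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).sum()
```

For example, a head rotation from facing +x to +y to &#x2212;x accumulates 180&#xb0;. A smaller travel angle indicates a more direct path to the target.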
<sec id="s2-1-1">
<title>2.1.1 Participants</title>
<p>Fifteen participants (seven male, eight female; aged 21&#x2013;33&#xa0;years, mean: 24.4) were recruited for this experiment. All participants had normal or corrected-to-normal vision. Two participants were excluded because their psychological thresholds for the electrostatic force stimuli were too high and exceeded the intensity range that our apparatus could present. Informed consent was obtained from all participants, and the study design was approved by the ethics committee of the Science and Technology Research Laboratories, Japan Broadcasting Corporation.</p>
</sec>
<sec id="s2-1-2">
<title>2.1.2 Apparatus</title>
<p>A corona charging gun (GC90N, Green Techno, Japan) was used to present electrostatic force stimuli. This device comprises a needle-shaped electrode (ion-emitting gun) and a high-voltage power supply unit (rated voltage range: 0 to &#x2212;90&#xa0;kV). The electrostatic force intensity was modulated by adjusting the applied voltage. The gun was hung from the ceiling and placed approximately 50&#xa0;cm above the participant&#x2019;s head, as shown in <xref ref-type="fig" rid="F1">Figure 1B</xref>. In addition, the participant wore a grounded wristband to avoid accidental shocks owing to unintentional charging.</p>
<p>A standalone HMD, Meta Quest 2 (Meta, United States), was used to present the 360&#xb0; visual images and auditory stimuli, and the controller joystick (for the right hand) was used to collect the responses. The HMD communicated with the corona charging gun via an Arduino-based microcomputer (M5Stick-C PLUS, M5Stack Technology, China) to control the analog inputs for the gun. The delay between the auditory and electrostatic force stimuli was a maximum of 20&#xa0;ms, which was sufficiently small to perform the task. The participants viewed the 360&#xb0; images while sitting in a swivel chair to facilitate viewing. They wore wired earphones (SE215, Shure, United States), which were connected to the HMD and used to present auditory stimuli using functions provided in Unity (Unity Technologies, United States) throughout the experiment, even when no auditory stimuli were presented. The experimental room was soundproof. Participant safety was duly considered; the floor was covered with an electrically grounded conductive mat, which collected stray ions, thereby preventing unintentional charging of other objects in the room.</p>
</sec>
<sec id="s2-1-3">
<title>2.1.3 Stimuli</title>
<sec id="s2-1-3-1">
<title>2.1.3.1 Visual stimuli</title>
<p>The target and distractor stimuli were presented in a VR environment implemented in Unity (2021.3.2 f1). The target stimulus was a white symbol randomly selected from &#x201c;&#x251c;&#x201d;, &#x201c;&#x2524;&#x201d;, &#x201c;&#x252c;&#x201d;, and &#x201c;&#x2534;&#x201d;, whereas the distractor stimuli were white &#x201c;&#x253c;&#x201d; symbols. These stimuli were displayed on a gray background and distributed within &#xb1;10&#xb0; of each intersection of the latitudes and longitudes of a sphere with a 5-m radius centered at the origin. The referential longitudes were placed at every 36&#xb0; of the horizontal 360&#xb0; view, and the referential latitudes at 22.5&#xb0; intervals between the elevation angles of &#x2212;45&#xb0; and 45&#xb0;. Thus, 1 target and 39 distractor stimuli were presented at 10 &#xd7; 4 locations. The stimulus sizes were randomly selected from visual angles of 2.86&#xb0; &#xb1; 1.43&#xb0;, both horizontally and vertically. The difficulty of the task was modulated by varying the stimulus size and placement, and the parameter values were selected based on our preliminary experiments.</p>
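The placement scheme above can be sketched as follows. Note one ambiguity we must hedge: 22.5&#xb0; steps between &#x2212;45&#xb0; and 45&#xb0; would give five elevation rows, while the stated 10 &#xd7; 4 locations imply four; this sketch assumes the four non-zero elevations, and all names are our own illustration, not the study's implementation.

```python
import random

AZIMUTHS = [i * 36.0 for i in range(10)]    # every 36 deg around the 360 deg view
ELEVATIONS = [-45.0, -22.5, 22.5, 45.0]     # assumed: four non-zero elevation rows

def stimulus_layout(jitter=10.0, size_mean=2.86, size_jitter=1.43):
    """Generate 40 (azimuth, elevation, size) placements in degrees,
    each jittered by +/-10 deg around its grid intersection."""
    placements = []
    for az in AZIMUTHS:
        for el in ELEVATIONS:
            placements.append((
                (az + random.uniform(-jitter, jitter)) % 360.0,
                el + random.uniform(-jitter, jitter),
                size_mean + random.uniform(-size_jitter, size_jitter),
            ))
    return placements

layout = stimulus_layout()
target = random.choice(layout)   # 1 target; the remaining 39 act as distractors
```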
</sec>
<sec id="s2-1-3-2">
<title>2.1.3.2 Electrostatic force stimuli</title>
<p>In this study, we refer to the haptic stimuli induced by the corona charging gun as electrostatic force stimuli. The electrostatic force intensity was determined based on the gun voltage. We selected the physical intensity of the electrostatic force for each participant based on their psychological threshold; the intensity ranged from zero to twice the threshold. Thus, we ensured that the stimulus intensity was psychologically equivalent among all participants. The threshold <inline-formula id="inf1">
<mml:math id="m1">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, which varied considerably across participants, was measured before the experiments using the staircase method and typically ranged from &#x2212;10 to &#x2212;30&#xa0;kV. We linearly modulated the stimulus intensity in response to the inner angle <inline-formula id="inf2">
<mml:math id="m2">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> between the head-direction vector <inline-formula id="inf3">
<mml:math id="m3">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and target stimulus vector <inline-formula id="inf4">
<mml:math id="m4">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, as shown in <xref ref-type="fig" rid="F2">Figure 2A</xref>. When the target was in front, i.e., <inline-formula id="inf5">
<mml:math id="m5">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, no electrostatic force was presented, whereas when it was behind, i.e., <inline-formula id="inf6">
<mml:math id="m6">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, the strongest electrostatic force of <inline-formula id="inf7">
<mml:math id="m7">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> was presented. Therefore, the electrostatic force was regarded as a cue stimulus because participants could potentially find the target stimulus by updating their head direction to avoid the subtle haptic sensations. That is, when the haptic sensations were sufficiently weak, the target stimulus was likely to be within the participant&#x2019;s visual field. This is the natural behavior of most people because a strong electrostatic stimulus is typically considered unpleasant.</p>
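The linear modulation rule just described can be sketched as follows. This is an illustrative implementation only; the function name is hypothetical, and mapping the returned intensity to gun voltage (or sound amplitude) is apparatus-specific.

```python
import numpy as np

def cue_intensity(v_h, v_t, i_th):
    """Linear cue-intensity modulation over the head-target angle.

    theta is the inner angle between the head-direction vector v_h and
    the target-stimulus vector v_t; the intensity rises linearly from 0
    (target in front, theta = 0) to 2 * i_th (target behind, theta = pi),
    where i_th is the participant's psychological threshold."""
    v_h = np.asarray(v_h, dtype=float)
    v_t = np.asarray(v_t, dtype=float)
    cos = np.dot(v_h, v_t) / (np.linalg.norm(v_h) * np.linalg.norm(v_t))
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    return 2.0 * i_th * theta / np.pi
```

When the head points directly at the target the intensity is zero, at 90&#xb0; it equals the threshold, and when the target is directly behind it reaches twice the threshold, so turning toward the target monotonically weakens the stimulus.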
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Stimulus intensity modulation. The stimulus intensity was linearly modulated in response to the inner angle, <inline-formula id="inf8">
<mml:math id="m8">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, between the head direction vector <inline-formula id="inf9">
<mml:math id="m9">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and vector to the target stimulus <inline-formula id="inf10">
<mml:math id="m10">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. <bold>(A)</bold> Schematic view of <inline-formula id="inf11">
<mml:math id="m11">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf12">
<mml:math id="m12">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf13">
<mml:math id="m13">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, which are defined in the <inline-formula id="inf14">
<mml:math id="m14">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> space. <bold>(B)</bold> Linear relationship between stimulus intensity and <inline-formula id="inf15">
<mml:math id="m15">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. Both the electrostatic force and auditory cues were modulated in this manner.</p>
</caption>
<graphic xlink:href="frvir-05-1379351-g002.tif"/>
</fig>
</sec>
<sec id="s2-1-3-3">
<title>2.1.3.3 Auditory stimuli</title>
<p>Monaural white noise was used as the auditory stimulus. We used the same modulation method for the auditory stimuli as that for the electrostatic force stimuli, as shown in <xref ref-type="fig" rid="F2">Figure 2</xref>. Specifically, we linearly modulated the stimulus intensity in response to the inner angle <inline-formula id="inf16">
<mml:math id="m16">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> between the head-direction vector <inline-formula id="inf17">
<mml:math id="m17">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and target stimulus vector <inline-formula id="inf18">
<mml:math id="m18">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. When the target was in front, i.e., <inline-formula id="inf19">
<mml:math id="m19">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, no sound was presented, whereas when it was behind, i.e., <inline-formula id="inf20">
<mml:math id="m20">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, the maximum amplitude (volume) of the stimulus of <inline-formula id="inf21">
<mml:math id="m21">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> was presented. As with the electrostatic force stimuli, the threshold <inline-formula id="inf22">
<mml:math id="m22">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> for auditory stimuli was measured for each participant before the experiments using the staircase method.</p>
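As an illustration, the intensity modulation shared by the electrostatic force and auditory cues can be sketched as follows. This is a minimal sketch, not the authors' implementation; the function and variable names are ours.

```python
import numpy as np

def cue_intensity(v_h, v_t, i_th):
    """Linearly modulated cue intensity: 0 when the target is directly
    ahead (theta = 0) and 2 * i_th when it is directly behind
    (theta = pi), where i_th is the per-participant threshold."""
    v_h = np.asarray(v_h, dtype=float)
    v_t = np.asarray(v_t, dtype=float)
    cos_theta = np.dot(v_h, v_t) / (np.linalg.norm(v_h) * np.linalg.norm(v_t))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # inner angle in [0, pi]
    return 2.0 * i_th * theta / np.pi
```

For example, a target at 90 degrees from the head direction yields exactly the threshold intensity `i_th`.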
</sec>
</sec>
<sec id="s2-1-4">
<title>2.1.4 Task and conditions</title>
<p>We designed a within-participant experiment to compare the effects of haptic and auditory guidance in a visual search task. The participants were instructed to find the target stimulus and indicate its direction using the joystick on the VR controller. For example, when they discovered the target stimulus &#x201c;&#x2534;,&#x201d; they tilted the joystick upward as quickly as possible. The trial was terminated once the joystick was manipulated. Feedback showing the success rate of the previous session was provided between sessions to encourage participants to complete the task. The task was conducted both with and without sensory cues, resulting in four conditions based on the combinations of cue stimuli: visual only (V), vision with auditory (A), vision with electrostatic force (E), and vision with both auditory and electrostatic force (AE) cues.</p>
</sec>
<sec id="s2-1-5">
<title>2.1.5 Procedure</title>
<p>The experiment comprised 12 sessions, each consisting of 12 visual search trials, for a total of 144 trials per participant. Thus, each condition (V, A, E, and AE) was presented 36 times per experiment. In three of the 12 sessions, only condition V was presented, whereas in the remaining sessions, conditions A, E, and AE were presented in a pseudo-random order. Before each session, we informed the participants whether the next session would be a V-only session or one with the non-visually-cued conditions. This prevented participants from waiting for non-visual cues during condition V and inadvertently wasting search time.</p>
<p>Each trial comprised a variable-length rest period (3&#x2013;6&#xa0;s) and a 10-s search period. During the rest period, 40 randomly generated distractors were presented; in the subsequent search period, one of the distractors was replaced with the target stimulus. Each trial ended as soon as the target stimulus was found or when the 10-s time limit was reached. Note that the participants completed two practice sessions before the main sessions to become familiar with the task and response methods.</p>
</sec>
</sec>
<sec id="s2-2">
<title>2.2 Analysis</title>
<sec id="s2-2-1">
<title>2.2.1 Behavioral data analysis</title>
<sec id="s2-2-1-1">
<title>2.2.1.1 Modeling</title>
<p>We recorded the participants&#x2019; responses and the extent of their head movements during the search period. Trials with a correct response were labeled as successful, whereas those with an incorrect response or no response were labeled as failed. The travel angle was defined as the accumulated rotational change in head direction during the target search. If guidance by electrostatic force and auditory cues is effective, the travel angles should be smaller than those without cues. Therefore, we investigated how efficiently each cue type modulated target discovery.</p>
<p>The travel angle allowed us to appropriately model the participants&#x2019; behavior in a visual search experiment with non-overt multimodal cues. In the original visual search experiments (<xref ref-type="bibr" rid="B55">Treisman and Gelade, 1980</xref>; <xref ref-type="bibr" rid="B32">McElree and Carrasco, 1999</xref>), wherein participants had to find target stimuli with specified visual features as quickly as possible, performance was measured by the reaction time required for identification. These paradigms have recently been extended to investigate user behavior in VR: cue-based visual search experiments in VR analyze reaction times and/or movement angles toward a target object (<xref ref-type="bibr" rid="B17">Gruenefeld, Ennenga, et al., 2017a</xref>; <xref ref-type="bibr" rid="B18">Gruenefeld, El Ali, et al., 2017b</xref>; <xref ref-type="bibr" rid="B8">Danieau et al., 2017</xref>; <xref ref-type="bibr" rid="B16">Gruenefeld et al., 2018</xref>; <xref ref-type="bibr" rid="B19">2019</xref>; <xref ref-type="bibr" rid="B47">Schmitz et al., 2020</xref>; <xref ref-type="bibr" rid="B20">Harada and Ohyama, 2022</xref>). However, whereas these previous studies employed overt cues that directly indicated the target location, we employed non-overt cues that indicated it only weakly, without interfering with the visuals. This difference could have affected participants&#x2019; behavior depending on their individual traits. For example, some participants may have adopted a scanning strategy, sequentially scanning the surrounding visual world while ignoring the cues because they considered subtle cues unreliable. Participants with better physical ability could have completed the task faster using this strategy. In such cases, the reaction time would not accurately reflect the effects of cueing on visual search performance, and the measured effects would differ substantially from those we were investigating. Because behaviors such as scanning that are not based on the presented cues result in larger travel angles, the effects of cues are likely better reflected in the travel angle than in the reaction time. Therefore, we used travel angles instead of reaction times to evaluate performance.</p>
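The travel angle can be computed from a sampled sequence of head-direction vectors by summing the angle between consecutive samples. The following is a minimal sketch of this computation, with names of our own choosing:

```python
import numpy as np

def travel_angle(head_dirs):
    """Accumulated rotational change (radians) over a sequence of
    head-direction vectors sampled during the search period."""
    h = np.asarray(head_dirs, dtype=float)
    h = h / np.linalg.norm(h, axis=1, keepdims=True)  # normalize each sample
    # Angle between each consecutive pair of unit vectors.
    cos_step = np.clip(np.sum(h[:-1] * h[1:], axis=1), -1.0, 1.0)
    return float(np.sum(np.arccos(cos_step)))
```

For instance, a head rotation of 90 degrees sampled in two 45-degree steps yields a travel angle of pi / 2.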
<p>We employed Bayesian modeling to evaluate the efficacy of each cue, as follows:<disp-formula id="e1">
<mml:math id="m23">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>!</mml:mo>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf23">
<mml:math id="m24">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the number of discoveries (successful trials), <inline-formula id="inf24">
<mml:math id="m25">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the expected target discovery rate, and <inline-formula id="inf25">
<mml:math id="m26">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the total travel angle. The probability of <inline-formula id="inf26">
<mml:math id="m27">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> given <inline-formula id="inf27">
<mml:math id="m28">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf28">
<mml:math id="m29">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> was calculated using the Poisson process (see <xref ref-type="sec" rid="s2-2-1-2">Section 2.2.1.2</xref>). Note that <inline-formula id="inf29">
<mml:math id="m30">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, where <inline-formula id="inf30">
<mml:math id="m31">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the number of trials and <inline-formula id="inf31">
<mml:math id="m32">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the travel angle during the <inline-formula id="inf32">
<mml:math id="m33">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>-th trial. By applying the Bayes theorem to Eq. <xref ref-type="disp-formula" rid="e1">1</xref>, the posterior distribution of <inline-formula id="inf33">
<mml:math id="m34">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> can be expressed as <inline-formula id="inf34">
<mml:math id="m35">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. By assuming a noninformative prior on <inline-formula id="inf35">
<mml:math id="m36">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf36">
<mml:math id="m37">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is proportional to the right side of Eq. <xref ref-type="disp-formula" rid="e1">1</xref>. Therefore, the expectation of <inline-formula id="inf37">
<mml:math id="m38">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> represents the target discovery rate, as follows:<disp-formula id="e2">
<mml:math id="m39">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="&#x7c;">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>Thus, <inline-formula id="inf38">
<mml:math id="m40">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> was interpreted as the number of discoveries per unit of travel angle.</p>
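The posterior of the discovery rate can be sampled directly. As a sketch, assuming the scale-invariant prior proportional to 1/lambda (our assumption; the text states only that a noninformative prior was used), the posterior of lambda given k and Phi is a Gamma distribution with shape k and rate Phi, whose mean k / Phi matches Eq. 2:

```python
import numpy as np

def discovery_rate_posterior(k, phi, size=100000, rng=None):
    """Draws samples from the posterior p(lambda | k, Phi) under a
    prior proportional to 1 / lambda, i.e., Gamma(shape=k, rate=phi).
    The posterior mean is k / phi, consistent with Eq. 2."""
    rng = np.random.default_rng(rng)
    # numpy parameterizes Gamma by shape and scale = 1 / rate.
    return rng.gamma(shape=k, scale=1.0 / phi, size=size)
```

For example, 36 discoveries over a total travel angle of 100 rad give a posterior centered near 0.36 discoveries per radian.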
</sec>
<sec id="s2-2-1-2">
<title>2.2.1.2 Poisson process model derivation</title>
<p>The total travel angle <inline-formula id="inf39">
<mml:math id="m41">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> was divided into <inline-formula id="inf40">
<mml:math id="m42">
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> bins of width <inline-formula id="inf41">
<mml:math id="m43">
<mml:mrow>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mo>/</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. The probability of finding a target stimulus in a bin with the expected <inline-formula id="inf42">
<mml:math id="m44">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is <inline-formula id="inf43">
<mml:math id="m45">
<mml:mrow>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. Therefore, the probability of finding targets in <inline-formula id="inf44">
<mml:math id="m46">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> from <inline-formula id="inf45">
<mml:math id="m47">
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> bins is represented by the following binomial distribution:<disp-formula id="e3">
<mml:math id="m48">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>!</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>!</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>!</mml:mo>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(3)</label>
</disp-formula>
</p>
<p>By letting the bin width approach zero, i.e., <inline-formula id="inf46">
<mml:math id="m49">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, we obtain <inline-formula id="inf47">
<mml:math id="m50">
<mml:mrow>
<mml:msub>
<mml:mi>lim</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2061;</mml:mo>
<mml:msub>
<mml:mi>lim</mml:mi>
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mi>&#x3f5;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> using the following relationships: <inline-formula id="inf48">
<mml:math id="m51">
<mml:mrow>
<mml:mi>&#x3f5;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf49">
<mml:math id="m52">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>&#x2248;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mo>/</mml:mo>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. In addition, the approximation <inline-formula id="inf50">
<mml:math id="m53">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>!</mml:mo>
<mml:mo>/</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>!</mml:mo>
<mml:mo>&#x2248;</mml:mo>
<mml:msup>
<mml:mi>N</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mo>/</mml:mo>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> holds as <inline-formula id="inf51">
<mml:math id="m54">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. Finally, we obtain the following Poisson process:<disp-formula id="e4">
<mml:math id="m55">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>!</mml:mo>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mfrac>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mrow>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mo>&#x394;</mml:mo>
<mml:mi>&#x3d5;</mml:mi>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>!</mml:mo>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>
</p>
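The binomial-to-Poisson limit underlying Eqs. 3 and 4 can be checked numerically. The following sketch uses illustrative values of lambda, Phi, and k (not taken from the experiment):

```python
import math

def binomial_pmf(k, n, p):
    """Eq. 3: probability of k discoveries among n bins, each with
    per-bin discovery probability p = delta_phi * lambda."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam_phi):
    """Eq. 4: Poisson limit with rate lambda * Phi."""
    return lam_phi**k * math.exp(-lam_phi) / math.factorial(k)

# As N grows (bin width delta_phi = Phi / N shrinks), the binomial
# probability converges to the Poisson probability.
lam, phi, k = 0.05, 200.0, 8  # illustrative values; lambda * Phi = 10
for n in (100, 1000, 100000):
    p_bin = (phi / n) * lam   # delta_phi * lambda
    print(n, abs(binomial_pmf(k, n, p_bin) - poisson_pmf(k, lam * phi)))
```

The printed gap shrinks toward zero as the number of bins increases.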
</sec>
<sec id="s2-2-1-3">
<title>2.2.1.3 Statistics</title>
<p>We created a dataset by pooling all observations obtained from the participants. Thereafter, we obtained the posterior distributions of the target discovery rate, <inline-formula id="inf52">
<mml:math id="m56">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
<mml:mo>&#x7c;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, for each condition. Subsequently, the significance of the visual field guidance was assessed by comparing the distribution shapes. For example, when <inline-formula id="inf53">
<mml:math id="m57">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> was larger for condition E than that for condition V and their distributions overlapped slightly, we concluded that the electrostatic force-based guidance significantly affected the visual field guidance. The overlap was quantified by the area under the curve (AUC) metric, the value of which ranged from 0 to 1; a smaller overlap resulted in an AUC value closer to 1. We compared the posterior distribution of <inline-formula id="inf54">
<mml:math id="m58">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> in condition AE with those in conditions A and E to identify the multimodal effect.</p>
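An AUC-style separation between two posterior distributions can be estimated from samples as the probability that a random draw from one posterior exceeds a random draw from the other. This is a sketch of one such estimator; the exact estimator used in the analysis is not specified in the text:

```python
import numpy as np

def auc_separation(samples_a, samples_b):
    """Probability that a random draw from posterior A exceeds a random
    draw from posterior B, over all sample pairs. Values near 1 indicate
    well-separated distributions; heavy overlap pulls the value toward 0.5."""
    a = np.asarray(samples_a, dtype=float)[:, None]
    b = np.asarray(samples_b, dtype=float)[None, :]
    return float(np.mean(a > b))  # fraction of pairs with a > b
```

For modest sample sizes the pairwise comparison is exact; for large posterior sample sets, a random subsample keeps the pairwise matrix small.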
</sec>
</sec>
</sec>
<sec id="s2-3">
<title>2.3 Simulation analysis</title>
<sec id="s2-3-1">
<title>2.3.1 Overview</title>
<p>To better understand how participants processed the multimodal inputs in the experiment, we conducted a simulation analysis assuming a perceptual model in which a participant determines the head direction by simply averaging two vectors directed towards the target, as induced through the auditory and haptic sensations (<xref ref-type="fig" rid="F3">Figure 3</xref>). This averaging constitutes the most typical explanation of the multimodal effect (<xref ref-type="bibr" rid="B11">Ernst and Banks, 2002</xref>; <xref ref-type="bibr" rid="B9">Ernst, 2006</xref>; <xref ref-type="bibr" rid="B10">2007</xref>). We manipulated the noise levels <inline-formula id="inf55">
<mml:math id="m59">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf56">
<mml:math id="m60">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf57">
<mml:math id="m61">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, assumed for the auditory sensations, haptic sensations, and orienting head directions, respectively, as shown in <xref ref-type="fig" rid="F3">Figure 3</xref>. Thereafter, we examined the relationship between the noise levels and target discovery rates for each stimulus condition.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Perceptual model of visual search with multimodal cues. The possible head directions were estimated separately based on the synthesized auditory and electrostatic force sensations generated by <inline-formula id="inf58">
<mml:math id="m62">
<mml:mrow>
<mml:msub>
<mml:mi>g</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf59">
<mml:math id="m63">
<mml:mrow>
<mml:msub>
<mml:mi>g</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. The final head direction in each iteration was determined by averaging the estimated directions.</p>
</caption>
<graphic xlink:href="frvir-05-1379351-g003.tif"/>
</fig>
<p>We implemented a computational model to determine the target stimulus direction based on the synthesized sensations. The head direction vector at time <inline-formula id="inf60">
<mml:math id="m64">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is represented as <inline-formula id="inf61">
<mml:math id="m65">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> (<inline-formula id="inf62">
<mml:math id="m66">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="&#x2016;" close="&#x2016;" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>), and the auditory and electrostatic force sensory inputs for the model are denoted as <inline-formula id="inf63">
<mml:math id="m67">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf64">
<mml:math id="m68">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. The model estimated the next head direction <inline-formula id="inf65">
<mml:math id="m69">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> such that the sensory inputs were reduced. By iterating these procedures, <inline-formula id="inf66">
<mml:math id="m70">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and/or <inline-formula id="inf67">
<mml:math id="m71">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> were minimized and the target stimulus in the direction of <inline-formula id="inf68">
<mml:math id="m72">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> could be identified. The detailed procedure is presented in <xref ref-type="sec" rid="s2-3-2">Section 2.3.2</xref>.</p>
<p>The simulation was initially conducted using randomly generated <inline-formula id="inf69">
<mml:math id="m73">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf70">
<mml:math id="m74">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> values. The search was iterated according to the synthesized sensations <inline-formula id="inf71">
<mml:math id="m75">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf72">
<mml:math id="m76">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, using different noise levels <inline-formula id="inf73">
<mml:math id="m77">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf74">
<mml:math id="m78">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, as shown in <xref ref-type="fig" rid="F3">Figure 3</xref>. To simulate multimodal processing, the model estimated <inline-formula id="inf75">
<mml:math id="m79">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> by averaging the <inline-formula id="inf76">
<mml:math id="m80">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf77">
<mml:math id="m81">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> estimations. The term <inline-formula id="inf78">
<mml:math id="m82">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mi mathvariant="double-struck">R</mml:mi>
<mml:mn>3</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> was introduced to represent orienting errors between the estimated and actual directions owing to physical constraints and other factors in the real experiment. Note that in the unimodal conditions, <inline-formula id="inf79">
<mml:math id="m83">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> or <inline-formula id="inf80">
<mml:math id="m84">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. An inner angle within <inline-formula id="inf81">
<mml:math id="m85">
<mml:mrow>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>30</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>&#xb0; between <inline-formula id="inf82">
<mml:math id="m86">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf83">
<mml:math id="m87">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> indicated that the target stimulus had been found, and the iteration was terminated.</p>
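<p>The termination criterion can be sketched in NumPy as follows (a minimal illustration under our own assumptions; the function name <monospace>target_found</monospace> and the unit-vector layout are ours, not the authors' code):</p>

```python
import numpy as np

def target_found(v_h, v_t, threshold_deg=30.0):
    # Inner angle between the unit head-direction and target vectors;
    # the iteration terminates once it drops below threshold_deg (30 deg here).
    cos_angle = np.clip(np.dot(v_h, v_t), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) < threshold_deg

# A head direction 20 deg away from the target satisfies the criterion.
v_t = np.array([0.0, 0.0, 1.0])
v_h = np.array([np.sin(np.radians(20.0)), 0.0, np.cos(np.radians(20.0))])
print(target_found(v_h, v_t))  # True: 20 deg < 30 deg
```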
<p>We ran the simulation with parameter settings that matched those of the real experiment as closely as possible; for example, the maximum amount and speed of head rotation and the number of trials were selected accordingly. The simulation was performed 468 times for each condition, corresponding to the setup in the real experiment (36 trials &#xd7; 13 participants). The travel angle and target discovery rate were computed using the methods described in <xref ref-type="sec" rid="s2-2-1-1">Section 2.2.1.1</xref>. To examine the effects of <inline-formula id="inf84">
<mml:math id="m88">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf85">
<mml:math id="m89">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf86">
<mml:math id="m90">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> on <inline-formula id="inf87">
<mml:math id="m91">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for conditions A, E, and AE, we generated these error terms from the following distributions: <inline-formula id="inf88">
<mml:math id="m92">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> with <inline-formula id="inf89">
<mml:math id="m93">
<mml:mrow>
<mml:msup>
<mml:mn>0.05</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x3c;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3c;</mml:mo>
<mml:msup>
<mml:mn>0.50</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf90">
<mml:math id="m94">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> with <inline-formula id="inf91">
<mml:math id="m95">
<mml:mrow>
<mml:msup>
<mml:mn>0.05</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x3c;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3c;</mml:mo>
<mml:msup>
<mml:mn>0.50</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf92">
<mml:math id="m96">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mi mathvariant="normal">I</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> with <inline-formula id="inf93">
<mml:math id="m97">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mn>0.01</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf94">
<mml:math id="m98">
<mml:mrow>
<mml:msup>
<mml:mn>0.1</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
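<p>The noise generation described above can be sketched as follows (a hedged NumPy sketch; <monospace>sample_noise</monospace> and the sweep granularity are our assumptions, only the distributions and ranges come from the text):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noise(sigma_a, sigma_e, sigma_h):
    # One draw of the three noise terms used per simulation step:
    # scalar sensory noise for the auditory and electrostatic channels,
    # and an isotropic 3-D perturbation of the head direction.
    eps_a = rng.normal(0.0, sigma_a)
    eps_e = rng.normal(0.0, sigma_e)
    eps_h = rng.normal(0.0, sigma_h, size=3)  # covariance sigma_h**2 * I
    return eps_a, eps_e, eps_h

# Sweep the sensory noise levels over the simulated range 0.05..0.50,
# with the head-motion noise fixed at sigma_h = 0.01 (or 0.1).
for sigma in np.linspace(0.05, 0.50, 10):
    eps_a, eps_e, eps_h = sample_noise(sigma, sigma, 0.01)
```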
</sec>
<sec id="s2-3-2">
<title>2.3.2 Procedure</title>
<p>This section details the simulation procedure summarized in <xref ref-type="sec" rid="s2-3-1">Section 2.3.1</xref> and <xref ref-type="fig" rid="F3">Figure 3</xref>.</p>
<p>The simulation model iteratively updated the head direction vector <inline-formula id="inf95">
<mml:math id="m99">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> based on synthetic sensory inputs. The next head direction <inline-formula id="inf96">
<mml:math id="m100">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> was determined in two steps: first, the head directions that minimized the error relative to the target vector (<inline-formula id="inf97">
<mml:math id="m101">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>) were estimated independently for each modality, and thereafter, <inline-formula id="inf98">
<mml:math id="m102">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> was obtained by averaging the estimated directions. In practice, because <inline-formula id="inf99">
<mml:math id="m103">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> was unknown, it was substituted with its estimate, which was obtained using an auditory or electrostatic force sensation, i.e., <inline-formula id="inf100">
<mml:math id="m104">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> or <inline-formula id="inf101">
<mml:math id="m105">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. Thus, the model determined the next head direction using a gradient descent search, as follows:<disp-formula id="e5">
<mml:math id="m106">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mfenced open="" close="|" separators="&#x7c;">
<mml:mrow>
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>&#x2207;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced open="&#x2016;" close="&#x2016;" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(5)</label>
</disp-formula>
<disp-formula id="e6">
<mml:math id="m107">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mfenced open="" close="|" separators="&#x7c;">
<mml:mrow>
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>&#x2207;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced open="&#x2016;" close="&#x2016;" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>where <inline-formula id="inf102">
<mml:math id="m108">
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is a step-size parameter. The value of <inline-formula id="inf103">
<mml:math id="m109">
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> corresponds to the head-rotation speed during the experiment. Evaluating the gradients, Eqs <xref ref-type="disp-formula" rid="e5">5</xref>, <xref ref-type="disp-formula" rid="e6">6</xref> can be rewritten as:<disp-formula id="e7">
<mml:math id="m110">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>
<disp-formula id="e8">
<mml:math id="m111">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>&#x3b1;</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>
</p>
<p>Finally, the next head direction vector was obtained as follows:<disp-formula id="e9">
<mml:math id="m112">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>where <inline-formula id="inf104">
<mml:math id="m113">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> follows <inline-formula id="inf105">
<mml:math id="m114">
<mml:mrow>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mi mathvariant="normal">I</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and represents the fluctuations associated with head motion. Note that the result of Eq. <xref ref-type="disp-formula" rid="e9">9</xref> is normalized before the next iteration. In unimodal simulations, Eq. <xref ref-type="disp-formula" rid="e9">9</xref> reduces to <inline-formula id="inf106">
<mml:math id="m115">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> or <inline-formula id="inf107">
<mml:math id="m116">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
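<p>One bimodal update step can be sketched as follows (a minimal NumPy sketch, not the authors' implementation; <monospace>next_head_direction</monospace> is our name, and each per-modality step is the gradient-descent step of Eq. 5, which moves the head toward that modality's target estimate):</p>

```python
import numpy as np

def next_head_direction(v_h, v_hat_a, v_hat_e, alpha, eps_h):
    # Per-modality gradient-descent step on ||v_hat - v_h||**2:
    # the negative gradient is 2 * (v_hat - v_h), so each step moves
    # the head direction toward that modality's target estimate.
    v_ha = v_h + 2.0 * alpha * (v_hat_a - v_h)  # auditory candidate
    v_he = v_h + 2.0 * alpha * (v_hat_e - v_h)  # electrostatic candidate
    # Average the two candidates, add head-motion noise, renormalize.
    v_next = 0.5 * (v_ha + v_he) + eps_h
    return v_next / np.linalg.norm(v_next)
```

With <monospace>alpha = 0.25</monospace> and both estimates equal to the true target, a single noiseless step moves the head direction halfway toward the target before renormalization.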
<p>
<inline-formula id="inf108">
<mml:math id="m117">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> was estimated using past auditory and somatosensory observations <inline-formula id="inf109">
<mml:math id="m118">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf110">
<mml:math id="m119">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>.</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, respectively, where <inline-formula id="inf111">
<mml:math id="m120">
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the number of observations used for the estimation. Because the stimulus intensities are given by the inner angle between the head and target directions, the simulated sensory inputs <inline-formula id="inf112">
<mml:math id="m121">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf113">
<mml:math id="m122">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> can be expressed as<disp-formula id="e10">
<mml:math id="m123">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2061;</mml:mo>
<mml:msup>
<mml:mi>cos</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x22c5;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(10)</label>
</disp-formula>
<disp-formula id="e11">
<mml:math id="m124">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2061;</mml:mo>
<mml:msup>
<mml:mi>cos</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x22c5;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(11)</label>
</disp-formula>where <inline-formula id="inf114">
<mml:math id="m125">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf115">
<mml:math id="m126">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are the noise terms that follow normal distributions with <inline-formula id="inf116">
<mml:math id="m127">
<mml:mrow>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf117">
<mml:math id="m128">
<mml:mrow>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, respectively, and <inline-formula id="inf118">
<mml:math id="m129">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf119">
<mml:math id="m130">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> control the magnitude of the generated noise.</p>
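<p>The synthetic sensory inputs of Eqs 10, 11 can be sketched as follows (an illustrative NumPy sketch under our assumptions; <monospace>observe</monospace> is our name, and both channels share the same form with their own noise level):</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def observe(v_t, v_h, sigma):
    # Noisy sensory input: the inner angle between the target and head
    # directions (via arccos of the dot product of unit vectors),
    # plus zero-mean Gaussian noise with standard deviation sigma.
    angle = np.arccos(np.clip(np.dot(v_t, v_h), -1.0, 1.0))
    return angle + rng.normal(0.0, sigma)
```

Setting <monospace>sigma</monospace> to &#x3c3;<sub>a</sub> or &#x3c3;<sub>e</sub> yields s<sub>a</sub>(t) or s<sub>e</sub>(t), respectively.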
<p>We define a head-direction matrix <inline-formula id="inf120">
<mml:math id="m131">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="&#x7c;">
<mml:mrow>
<mml:mtable columnalign="center">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mo>&#x22ef;</mml:mo>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and observation vectors <inline-formula id="inf121">
<mml:math id="m132">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="&#x7c;">
<mml:mrow>
<mml:mtable columnalign="center">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mo>&#x22ef;</mml:mo>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf122">
<mml:math id="m133">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="&#x7c;">
<mml:mrow>
<mml:mtable columnalign="center">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mo>&#x22ef;</mml:mo>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. If we assume that <inline-formula id="inf123">
<mml:math id="m134">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf124">
<mml:math id="m135">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are negatively correlated with <inline-formula id="inf125">
<mml:math id="m136">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, we can estimate <inline-formula id="inf126">
<mml:math id="m137">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> such that <inline-formula id="inf127">
<mml:math id="m138">
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf128">
<mml:math id="m139">
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are minimized under the constraint of <inline-formula id="inf129">
<mml:math id="m140">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="&#x2016;" close="&#x2016;" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, where <inline-formula id="inf130">
<mml:math id="m141">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf131">
<mml:math id="m142">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf132">
<mml:math id="m143">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are centered in advance. Letting <inline-formula id="inf133">
<mml:math id="m144">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf134">
<mml:math id="m145">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> be the target vectors obtained through the auditory and haptic signals, respectively, the estimation is then tractable using the method of Lagrange multipliers, as follows:<disp-formula id="e12">
<mml:math id="m146">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(12)</label>
</disp-formula>
<disp-formula id="e13">
<mml:math id="m147">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(13)</label>
</disp-formula>where <inline-formula id="inf135">
<mml:math id="m148">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf136">
<mml:math id="m149">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are the Lagrangian functions for each modality, and <inline-formula id="inf137">
<mml:math id="m150">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf138">
<mml:math id="m151">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are the Lagrange multipliers. By considering <inline-formula id="inf139">
<mml:math id="m152">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf140">
<mml:math id="m153">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> as the estimators of <inline-formula id="inf141">
<mml:math id="m154">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf142">
<mml:math id="m155">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, respectively, in Eqs <xref ref-type="disp-formula" rid="e12">12</xref>, <xref ref-type="disp-formula" rid="e13">13</xref>, we obtain their values by solving <inline-formula id="inf143">
<mml:math id="m156">
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
<mml:mo>/</mml:mo>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn mathvariant="bold">0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf144">
<mml:math id="m157">
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
<mml:mo>/</mml:mo>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn mathvariant="bold">0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> with <inline-formula id="inf145">
<mml:math id="m158">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="&#x2016;" close="&#x2016;" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf146">
<mml:math id="m159">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="&#x2016;" close="&#x2016;" separators="&#x7c;">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, respectively:<disp-formula id="e14">
<mml:math id="m160">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:msubsup>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(14)</label>
</disp-formula>
<disp-formula id="e15">
<mml:math id="m161">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:msubsup>
<mml:mi mathvariant="normal">V</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi mathvariant="normal">t</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi mathvariant="bold">s</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
</mml:mfrac>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
<label>(15)</label>
</disp-formula>Therefore, by substituting <inline-formula id="inf147">
<mml:math id="m162">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf148">
<mml:math id="m163">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> in Eqs <xref ref-type="disp-formula" rid="e7">7</xref>, <xref ref-type="disp-formula" rid="e8">8</xref> with Eqs <xref ref-type="disp-formula" rid="e14">14</xref>, <xref ref-type="disp-formula" rid="e15">15</xref>, the next head direction vectors can be estimated.</p>
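As a concrete illustration of the closed-form estimators in Eqs 14, 15, the solution reduces to a few lines of linear algebra. The following is a minimal sketch, not the authors' implementation: the matrix `V_h` (candidate head-direction sensations) and the sensory vector `s_a` are random, centered stand-ins, and their dimensions are assumed for illustration only.

```python
import numpy as np

def estimate_target(V_h: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Closed-form Lagrange-multiplier solution (Eqs 14, 15):
    v_hat = -V_h^T s / sqrt(s^T V_h V_h^T s), which satisfies ||v_hat|| = 1."""
    num = V_h.T @ s
    return -num / np.sqrt(s @ V_h @ V_h.T @ s)

rng = np.random.default_rng(0)
V_h = rng.standard_normal((5, 3))   # hypothetical: 5 sensation dims x 3D directions
V_h -= V_h.mean(axis=0)             # centered in advance, as the derivation assumes
s_a = rng.standard_normal(5)        # hypothetical auditory sensation vector
s_a -= s_a.mean()

v_hat = estimate_target(V_h, s_a)
print(np.linalg.norm(v_hat))        # unit-norm constraint holds (up to rounding)
```

The minimizer points opposite the gradient direction V_h^T s_a, normalized to the unit sphere, so s_a^T V_h v_hat attains its minimum value of -||V_h^T s_a||.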
<p>Initially, <inline-formula id="inf149">
<mml:math id="m164">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf150">
<mml:math id="m165">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> were randomly selected from the 360&#xb0; omnidirectional candidates. In addition, <inline-formula id="inf151">
<mml:math id="m166">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> for <inline-formula id="inf152">
<mml:math id="m167">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> were also generated around <inline-formula id="inf153">
<mml:math id="m168">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Based on the sampled <inline-formula id="inf154">
<mml:math id="m169">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf155">
<mml:math id="m170">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, synthetic sensations were generated using Eqs <xref ref-type="disp-formula" rid="e9">9</xref>&#x2013;<xref ref-type="disp-formula" rid="e11">11</xref>.</p>
<p>The target search was conducted using a maximum of 1000 steps. The simulation parameter values of <inline-formula id="inf156">
<mml:math id="m171">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf157">
<mml:math id="m172">
<mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.01</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> were selected to ensure that the target discovery rates were similar to those observed in the real experiments.</p>
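The initialization and candidate-generation steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic-sensation step (Eqs 9–11) is omitted, and the use of `alpha` as a perturbation scale around v_h(0) is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_unit_vector() -> np.ndarray:
    """Uniform random direction from the 360-degree omnidirectional candidates."""
    v = rng.standard_normal(3)
    return v / np.linalg.norm(v)

N = 10          # number of candidate head directions, from the text
alpha = 0.01    # simulation parameter; its role as jitter scale is an assumption

v_t = random_unit_vector()     # target direction, selected randomly
v_h0 = random_unit_vector()    # initial head direction v_h(0)

# Candidates v_h(t), 1 < t < N, generated around v_h(0); the paper generates
# them via Eqs 9-11, approximated here by small Gaussian perturbations
# re-normalized onto the unit sphere.
candidates = [v_h0]
for _ in range(N - 1):
    v = v_h0 + alpha * rng.standard_normal(3)
    candidates.append(v / np.linalg.norm(v))

print(len(candidates))  # → 10
```

Each simulated trial would then iterate this sampling for up to 1000 steps, updating the head direction from the estimators of Eqs 14, 15 until the target is discovered.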
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3 Results</title>
<sec id="s3-1">
<title>3.1 Behavioral results</title>
<p>We pooled all data obtained from the 13 participants. The number of successful trials <inline-formula id="inf158">
<mml:math id="m173">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and accumulated travel distances <inline-formula id="inf159">
<mml:math id="m174">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for each condition were <inline-formula id="inf160">
<mml:math id="m175">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>352</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf161">
<mml:math id="m176">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>3.12</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mn>10</mml:mn>
<mml:mn>3</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> for V; <inline-formula id="inf162">
<mml:math id="m177">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>422</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf163">
<mml:math id="m178">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2.44</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mn>10</mml:mn>
<mml:mn>3</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> for A; <inline-formula id="inf164">
<mml:math id="m179">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>391</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf165">
<mml:math id="m180">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2.80</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mn>10</mml:mn>
<mml:mn>3</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> for E; and <inline-formula id="inf166">
<mml:math id="m181">
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>417</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf167">
<mml:math id="m182">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2.66</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mn>10</mml:mn>
<mml:mn>3</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> for AE. Note that the numbers of trials with response errors, which appeared to be due to manipulation errors, were 6, 7, 10, and 4 for the V, A, E, and AE conditions, respectively. <xref ref-type="fig" rid="F4">Figure 4</xref> shows histograms of the successful and failed trials plotted against the travel angles, wherein the blue and red plots denote successful and failed trials, respectively. Because the target locations were determined randomly, targets could appear within the participant&#x2019;s visual field at the beginning of a trial; hence, targets were identified in all conditions even when the travel angles were short or close to zero. The failed trials featured longer travel angles, suggesting that the participants looked around but were unable to complete the task within the time limit.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Relationships between the number of discoveries and travel angles. The data of 13 participants were pooled. The panels, from left to right, show the relationships for each condition: vision only (V), vision &#x2b; auditory cue (A), vision &#x2b; electrostatic force cue (E), and vision &#x2b; auditory and electrostatic force cues (AE).</p>
</caption>
<graphic xlink:href="frvir-05-1379351-g004.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F5">Figure 5</xref> shows the posterior distributions of <inline-formula id="inf168">
<mml:math id="m183">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for each condition. The expected discovery rates <inline-formula id="inf169">
<mml:math id="m184">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf170">
<mml:math id="m185">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf171">
<mml:math id="m186">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>E</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf172">
<mml:math id="m187">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> for each condition were 0.113, 0.173, 0.140, and 0.157, respectively. As predicted, guidance with the auditory cues significantly improved the target discovery rate compared to condition V, and no overlap was observed between the distributions. In addition, the target discovery rate improved significantly in condition E compared with condition V, although not as much as that in condition A. The AUC between the distributions of conditions V and E was 0.998, suggesting that <inline-formula id="inf173">
<mml:math id="m188">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for condition E was significantly higher than that for condition V. In addition, there was no overlap between the distributions of condition V and the other conditions, A and AE, indicating that the AUCs were 1. This indicates that visual field guidance using electrostatic force is effective even in a VR environment wherein users view a 360&#xb0; world using both head and body movements.</p>
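The reported expected discovery rates equal k/&#x3a6; for each condition, consistent with modeling discoveries as a Poisson process over the accumulated travel distance. The following sketch makes that relationship explicit under the assumption that Eq. 1 (defined earlier in the paper) yields a Gamma(k, &#x3a6;) posterior for &#x3bb;; it reproduces the reported posterior means and an AUC-style overlap (the probability that a posterior sample for E exceeds one for V).

```python
import numpy as np

# Pooled (k, Phi) per condition, from the text
data = {"V": (352, 3.12e3), "A": (422, 2.44e3),
        "E": (391, 2.80e3), "AE": (417, 2.66e3)}

rng = np.random.default_rng(0)
samples = {}
for cond, (k, phi) in data.items():
    # Assumed Gamma(shape=k, rate=Phi) posterior for the discovery rate lambda;
    # posterior mean k/Phi matches the reported 0.113, 0.173, 0.140, 0.157
    samples[cond] = rng.gamma(shape=k, scale=1.0 / phi, size=100_000)
    print(cond, round(k / phi, 3))

# Overlap between the V and E posteriors (reported AUC ~ 0.998)
auc_VE = float(np.mean(samples["E"] > samples["V"]))
print(auc_VE)  # close to the reported value of 0.998
```

With these counts the V and E posteriors barely overlap, which is why the Monte Carlo estimate lands very near, but below, 1.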
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>
<inline-formula id="inf174">
<mml:math id="m189">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> variations for each condition. A larger <inline-formula id="inf175">
<mml:math id="m190">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> indicates better performance. Evidently, compared with condition V, <inline-formula id="inf176">
<mml:math id="m191">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> was improved more under conditions E and A. Each plot was drawn based on Eq. <xref ref-type="disp-formula" rid="e1">1</xref>, with the number of successful trials <inline-formula id="inf177">
<mml:math id="m192">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, and the accumulated travel distances <inline-formula id="inf178">
<mml:math id="m193">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> observed for each condition.</p>
</caption>
<graphic xlink:href="frvir-05-1379351-g005.tif"/>
</fig>
<p>We observed that the performance in condition AE fell between those in conditions A and E, thereby rejecting both the <italic>synergy</italic> and <italic>masking effects</italic>: searching with both cues did not enhance performance beyond that of the better single cue, and the participants could not ignore the less effective cue. This result supports the <italic>medial effect</italic>, which was one of the anticipated candidates.</p>
</sec>
<sec id="s3-2">
<title>3.2 Simulation results</title>
<p>
<xref ref-type="fig" rid="F6">Figure 6</xref> shows <inline-formula id="inf179">
<mml:math id="m194">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for each cue condition, plotted against the uncertainty ratio of the AE cues, obtained by varying the electrostatic force uncertainty while holding the auditory cue uncertainty constant. <inline-formula id="inf180">
<mml:math id="m195">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf181">
<mml:math id="m196">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf182">
<mml:math id="m197">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> represent the perceptual uncertainty, i.e., the variances for <inline-formula id="inf183">
<mml:math id="m198">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>a</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf184">
<mml:math id="m199">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3f5;</mml:mi>
<mml:mi>e</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf185">
<mml:math id="m200">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, respectively, as shown in <xref ref-type="fig" rid="F3">Figure 3</xref>. Summaries of the parameters and observations shown in <xref ref-type="fig" rid="F6">Figures 6A, D</xref> are presented in <xref ref-type="table" rid="T1">Tables 1</xref>, <xref ref-type="table" rid="T2">2</xref>, respectively.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Comparisons of <inline-formula id="inf186">
<mml:math id="m201">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> in the simulation analysis. <bold>(A&#x2013;C)</bold> and <bold>(D&#x2013;F)</bold> show the effects of <inline-formula id="inf187">
<mml:math id="m202">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mn>0.01</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf188">
<mml:math id="m203">
<mml:mrow>
<mml:msup>
<mml:mn>0.1</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> on <inline-formula id="inf189">
<mml:math id="m204">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. The expected <inline-formula id="inf190">
<mml:math id="m205">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> values under each condition are plotted against the <inline-formula id="inf191">
<mml:math id="m206">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> ratio using a representative value of <inline-formula id="inf192">
<mml:math id="m207">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mn>0.17</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. As no electrostatic-force stimuli were presented in condition A, <inline-formula id="inf193">
<mml:math id="m208">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> technically could not be plotted against <inline-formula id="inf194">
<mml:math id="m209">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>. However, for reference, as <inline-formula id="inf195">
<mml:math id="m210">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for condition A was independent of <inline-formula id="inf196">
<mml:math id="m211">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, we plotted the expected <inline-formula id="inf197">
<mml:math id="m212">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> values for condition A as horizontal lines against <inline-formula id="inf198">
<mml:math id="m213">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> using a constant <inline-formula id="inf199">
<mml:math id="m214">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> value. The shaded areas behind the plots denote 95% credible intervals of the posterior distribution of <inline-formula id="inf200">
<mml:math id="m215">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. <bold>(B,E)</bold>, and <bold>(C,F)</bold> show the posterior distributions of <inline-formula id="inf201">
<mml:math id="m216">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> for <inline-formula id="inf202">
<mml:math id="m217">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf203">
<mml:math id="m218">
<mml:mrow>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. The original values presented in <bold>(A&#x2013;C)</bold> and <bold>(D&#x2013;F)</bold> are shown in <xref ref-type="table" rid="T1">Tables 1</xref>, <xref ref-type="table" rid="T2">2</xref>, respectively.</p>
</caption>
<graphic xlink:href="frvir-05-1379351-g006.tif"/>
</fig>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Summary of parameters and observations in simulation analysis (<inline-formula id="inf204">
<mml:math id="m219">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mn>0.01</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>).</p>
</caption>
<table>
<thead valign="top">
<tr>
<th rowspan="2" align="center">
<inline-formula id="inf205">
<mml:math id="m220">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th rowspan="2" align="center">
<inline-formula id="inf206">
<mml:math id="m221">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th colspan="3" align="center">E</th>
<th colspan="3" align="center">AE</th>
</tr>
<tr>
<th align="center">
<inline-formula id="inf207">
<mml:math id="m222">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf208">
<mml:math id="m223">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf209">
<mml:math id="m224">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf210">
<mml:math id="m225">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf211">
<mml:math id="m226">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf212">
<mml:math id="m227">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="right">0.05<sup>2</sup>
</td>
<td align="right">0.09</td>
<td align="right">468</td>
<td align="right">952.9</td>
<td align="right">0.491</td>
<td align="right">468</td>
<td align="right">921.3</td>
<td align="right">0.508</td>
</tr>
<tr>
<td align="right">0.08<sup>2</sup>
</td>
<td align="right">0.22</td>
<td align="right">468</td>
<td align="right">1105.4</td>
<td align="right">0.423</td>
<td align="right">468</td>
<td align="right">1133.7</td>
<td align="right">0.413</td>
</tr>
<tr>
<td align="right">0.11<sup>2</sup>
</td>
<td align="right">0.42</td>
<td align="right">468</td>
<td align="right">1583.2</td>
<td align="right">0.296</td>
<td align="right">468</td>
<td align="right">1362.7</td>
<td align="right">0.343</td>
</tr>
<tr>
<td align="right">0.14<sup>2</sup>
</td>
<td align="right">0.68</td>
<td align="right">468</td>
<td align="right">2176.8</td>
<td align="right">0.215</td>
<td align="right">468</td>
<td align="right">1741.6</td>
<td align="right">0.269</td>
</tr>
<tr>
<td align="right">0.17<sup>2</sup>
</td>
<td align="right">1.00</td>
<td align="right">468</td>
<td align="right">2651.1</td>
<td align="right">0.177</td>
<td align="right">468</td>
<td align="right">1897.4</td>
<td align="right">0.247</td>
</tr>
<tr>
<td align="right">0.20<sup>2</sup>
</td>
<td align="right">1.38</td>
<td align="right">468</td>
<td align="right">3106.3</td>
<td align="right">0.151</td>
<td align="right">468</td>
<td align="right">2093.9</td>
<td align="right">0.224</td>
</tr>
<tr>
<td align="right">0.23<sup>2</sup>
</td>
<td align="right">1.83</td>
<td align="right">467</td>
<td align="right">3761.7</td>
<td align="right">0.124</td>
<td align="right">468</td>
<td align="right">2339.1</td>
<td align="right">0.200</td>
</tr>
<tr>
<td align="right">0.26<sup>2</sup>
</td>
<td align="right">2.34</td>
<td align="right">466</td>
<td align="right">4249.6</td>
<td align="right">0.110</td>
<td align="right">468</td>
<td align="right">2577.8</td>
<td align="right">0.182</td>
</tr>
<tr>
<td align="right">0.29<sup>2</sup>
</td>
<td align="right">2.91</td>
<td align="right">464</td>
<td align="right">4691.1</td>
<td align="right">0.099</td>
<td align="right">468</td>
<td align="right">2616.1</td>
<td align="right">0.179</td>
</tr>
<tr>
<td align="right">0.32<sup>2</sup>
</td>
<td align="right">3.54</td>
<td align="right">461</td>
<td align="right">5150.7</td>
<td align="right">0.090</td>
<td align="right">468</td>
<td align="right">2837.3</td>
<td align="right">0.165</td>
</tr>
<tr>
<td align="right">0.35<sup>2</sup>
</td>
<td align="right">4.24</td>
<td align="right">460</td>
<td align="right">5707.8</td>
<td align="right">0.081</td>
<td align="right">467</td>
<td align="right">2818.1</td>
<td align="right">0.166</td>
</tr>
<tr>
<td align="right">0.38<sup>2</sup>
</td>
<td align="right">5.00</td>
<td align="right">457</td>
<td align="right">5920.3</td>
<td align="right">0.077</td>
<td align="right">468</td>
<td align="right">3082.3</td>
<td align="right">0.152</td>
</tr>
<tr>
<td align="right">0.41<sup>2</sup>
</td>
<td align="right">5.82</td>
<td align="right">446</td>
<td align="right">6698.8</td>
<td align="right">0.067</td>
<td align="right">467</td>
<td align="right">3045.0</td>
<td align="right">0.154</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>Summary of parameters and observations in simulation analysis (<inline-formula id="inf213">
<mml:math id="m228">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:msup>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>).</p>
</caption>
<table>
<thead valign="top">
<tr>
<th rowspan="2" align="center">
<inline-formula id="inf214">
<mml:math id="m229">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th rowspan="2" align="center">
<inline-formula id="inf215">
<mml:math id="m230">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th colspan="3" align="center">E</th>
<th colspan="3" align="center">AE</th>
</tr>
<tr>
<th align="center">
<inline-formula id="inf216">
<mml:math id="m231">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf217">
<mml:math id="m232">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf218">
<mml:math id="m233">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf219">
<mml:math id="m234">
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf220">
<mml:math id="m235">
<mml:mrow>
<mml:mi mathvariant="normal">&#x3a6;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
<th align="center">
<inline-formula id="inf221">
<mml:math id="m236">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="right">0.05<sup>2</sup>
</td>
<td align="right">0.09</td>
<td align="right">468</td>
<td align="right">727.3</td>
<td align="right">0.643</td>
<td align="right">468</td>
<td align="right">1016.1</td>
<td align="right">0.461</td>
</tr>
<tr>
<td align="right">0.08<sup>2</sup>
</td>
<td align="right">0.22</td>
<td align="right">468</td>
<td align="right">882.3</td>
<td align="right">0.530</td>
<td align="right">468</td>
<td align="right">1285.8</td>
<td align="right">0.364</td>
</tr>
<tr>
<td align="right">0.11<sup>2</sup>
</td>
<td align="right">0.42</td>
<td align="right">468</td>
<td align="right">1338.2</td>
<td align="right">0.350</td>
<td align="right">468</td>
<td align="right">1595.7</td>
<td align="right">0.293</td>
</tr>
<tr>
<td align="right">0.14<sup>2</sup>
</td>
<td align="right">0.68</td>
<td align="right">468</td>
<td align="right">1753.9</td>
<td align="right">0.267</td>
<td align="right">468</td>
<td align="right">1899.7</td>
<td align="right">0.246</td>
</tr>
<tr>
<td align="right">0.17<sup>2</sup>
</td>
<td align="right">1.00</td>
<td align="right">468</td>
<td align="right">2191.7</td>
<td align="right">0.214</td>
<td align="right">468</td>
<td align="right">2111.3</td>
<td align="right">0.222</td>
</tr>
<tr>
<td align="right">0.20<sup>2</sup>
</td>
<td align="right">1.38</td>
<td align="right">468</td>
<td align="right">2906.4</td>
<td align="right">0.161</td>
<td align="right">468</td>
<td align="right">2386.7</td>
<td align="right">0.196</td>
</tr>
<tr>
<td align="right">0.23<sup>2</sup>
</td>
<td align="right">1.83</td>
<td align="right">468</td>
<td align="right">3337.2</td>
<td align="right">0.140</td>
<td align="right">468</td>
<td align="right">2567.0</td>
<td align="right">0.182</td>
</tr>
<tr>
<td align="right">0.26<sup>2</sup>
</td>
<td align="right">2.34</td>
<td align="right">468</td>
<td align="right">3567.1</td>
<td align="right">0.131</td>
<td align="right">468</td>
<td align="right">2811.7</td>
<td align="right">0.166</td>
</tr>
<tr>
<td align="right">0.29<sup>2</sup>
</td>
<td align="right">2.91</td>
<td align="right">468</td>
<td align="right">4272.9</td>
<td align="right">0.110</td>
<td align="right">468</td>
<td align="right">2990.2</td>
<td align="right">0.157</td>
</tr>
<tr>
<td align="right">0.32<sup>2</sup>
</td>
<td align="right">3.54</td>
<td align="right">467</td>
<td align="right">4723.8</td>
<td align="right">0.099</td>
<td align="right">468</td>
<td align="right">2925.0</td>
<td align="right">0.160</td>
</tr>
<tr>
<td align="right">0.35<sup>2</sup>
</td>
<td align="right">4.24</td>
<td align="right">467</td>
<td align="right">5069.9</td>
<td align="right">0.092</td>
<td align="right">468</td>
<td align="right">3162.1</td>
<td align="right">0.148</td>
</tr>
<tr>
<td align="right">0.38<sup>2</sup>
</td>
<td align="right">5.00</td>
<td align="right">466</td>
<td align="right">5443.8</td>
<td align="right">0.086</td>
<td align="right">468</td>
<td align="right">3288.5</td>
<td align="right">0.142</td>
</tr>
<tr>
<td align="right">0.41<sup>2</sup>
</td>
<td align="right">5.82</td>
<td align="right">462</td>
<td align="right">5884.6</td>
<td align="right">0.079</td>
<td align="right">468</td>
<td align="right">3418.1</td>
<td align="right">0.137</td>
</tr>
</tbody>
</table>
</table-wrap>
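The second column of Tables 1 and 2 is simply the ratio of the simulated noise variances. As a quick check, the following sketch reproduces that column; the value sigma_a = 0.17 is an assumption inferred from the row where the ratio equals 1.00, not a figure stated in this excerpt.

```python
# Reproduce the sigma_e^2 / sigma_a^2 column of Tables 1 and 2.
# Assumption: sigma_a = 0.17, inferred from the row where the ratio is 1.00.
sigma_a = 0.17
sigma_e_values = [0.05, 0.08, 0.11, 0.14, 0.17, 0.20, 0.23,
                  0.26, 0.29, 0.32, 0.35, 0.38, 0.41]

ratios = [round(s ** 2 / sigma_a ** 2, 2) for s in sigma_e_values]
print(ratios)  # 0.09, 0.22, 0.42, ..., 1.38, ..., 5.82, matching the tables
```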
<p>As shown in <xref ref-type="fig" rid="F6">Figures 6A, D</xref>, the expected <inline-formula id="inf222">
<mml:math id="m237">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> values for unimodal cueing in condition E and multimodal cueing in condition AE asymptotically decreased as <inline-formula id="inf223">
<mml:math id="m238">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> increased in both cases with <inline-formula id="inf224">
<mml:math id="m239">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mn>0.01</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf225">
<mml:math id="m240">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:msup>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. The <italic>medial effect</italic> observed in our behavioral data was particularly evident for larger ratios of <inline-formula id="inf226">
<mml:math id="m241">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf227">
<mml:math id="m242">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>, as shown in <xref ref-type="fig" rid="F6">Figures 6D, F</xref>. Notably, when the <inline-formula id="inf228">
<mml:math id="m243">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> ratio was close to 1 under <inline-formula id="inf229">
<mml:math id="m244">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mn>0.01</mml:mn>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> (<xref ref-type="fig" rid="F6">Figures 6A, B</xref>), we observed the <italic>synergy effect</italic>, wherein <inline-formula id="inf230">
<mml:math id="m245">
<mml:mrow>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> with multimodal cueing was better than that with unimodal cueing. The magnitude of the <inline-formula id="inf231">
<mml:math id="m246">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>e</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> ratio indicates how strongly the uncertainty of the electrostatic force sensation is biased relative to that of the auditory sensation. Therefore, these results suggest that both <italic>medial</italic> and <italic>synergy effects</italic> were observable under the assumption of the typical multimodal integration model (<xref ref-type="bibr" rid="B11">Ernst and Banks, 2002</xref>; <xref ref-type="bibr" rid="B9">Ernst, 2006</xref>; <xref ref-type="bibr" rid="B10">2007</xref>) (<xref ref-type="fig" rid="F3">Figure 3</xref>), depending on the uncertainty bias of each sensation and on head motion.</p>
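The integration model cited above is usually stated as reliability-weighted (maximum-likelihood) cue fusion. The sketch below illustrates that standard rule with purely illustrative variances (not the experiment's fitted parameters); note that under exact reliability weighting the fused variance never exceeds that of the better cue, so a medial effect implies additional noise sources beyond ideal weighting.

```python
def mle_fusion_variance(var_a: float, var_e: float) -> float:
    """Variance of the reliability-weighted (MLE) fused estimate
    (Ernst and Banks, 2002): 1 / (1/V(A) + 1/V(E))."""
    return 1.0 / (1.0 / var_a + 1.0 / var_e)

def mle_weights(var_a: float, var_e: float) -> tuple[float, float]:
    # Each cue is weighted by its inverse variance (its reliability).
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_e)
    return w_a, 1.0 - w_a

# Equal cues: fused variance is half of either unimodal variance.
print(mle_fusion_variance(1.0, 1.0))   # 0.5
# Even a five-fold noisier haptic cue cannot hurt the MLE-fused estimate:
print(mle_fusion_variance(1.0, 5.0))   # ~0.833, still below 1.0
```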
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4 Discussion</title>
<p>We demonstrated the multimodal effects of AE cues on visual field guidance in 360&#xb0; VR, and found through the psychophysical experiment and the simulation analysis that both <italic>medial</italic> and <italic>synergy effects</italic> were observable depending on the uncertainty of the cue stimuli. Specifically, guidance performance with multimodal cueing is modulated by the balance of the perceptual uncertainty elicited by each cue stimulus. We also demonstrated the applicability of the electrostatic force-based stimulation method in VR applications; electrostatic stimulation through the corona charging gun allowed users to make large body movements. These results suggest that multimodal cueing with electrostatic force has sufficient potential to gently guide user behavior in 360&#xb0; VR, offering a highly immersive visual experience through spontaneous viewing.</p>
<p>We showed that electrostatic force can be used as a haptic cue to guide the visual field. However, the search performance did not reach that achieved with the auditory cue, even though we selected cue intensities that varied equally over small ranges around the supra- and sub-thresholds, with no significant difference in the perceptual domain. In informal post-experiment interviews, some participants reported that the sensation induced by the electrostatic force was attenuated, especially while moving. In addition, most participants reported that the auditory cue made it easier to identify the target location. This suggests that the haptic sensation was affected by the body motion that inevitably accompanied the updating of the head direction. The uncertainty of the haptic sensation was estimated to be approximately five times greater than that of the auditory sensation, as suggested by the simulation results (<xref ref-type="fig" rid="F6">Figure 6F</xref>). Because the perception of changes in stimulus intensity associated with visual field updates acts as a cue for estimating the target direction, increasing the electrostatic field intensity so that it is strong enough to resist the effects of body motion could mitigate this uncertainty. As suggested by the simulation results presented in <xref ref-type="sec" rid="s3-2">Section 3.2</xref>, reducing the perceptual uncertainty improves the search performance. This aspect has been overlooked in previous studies, which mainly focused on visual field guidance using overt cue stimuli (<xref ref-type="bibr" rid="B17">Gruenefeld, Ennenga, et al., 2017a</xref>; <xref ref-type="bibr" rid="B18">Gruenefeld, El Ali, et al., 2017b</xref>; <xref ref-type="bibr" rid="B8">Danieau et al., 2017</xref>; <xref ref-type="bibr" rid="B16">Gruenefeld et al., 2018</xref>; <xref ref-type="bibr" rid="B19">2019</xref>; <xref ref-type="bibr" rid="B20">Harada and Ohyama, 2022</xref>). This finding highlights a key requirement on the properties of cue stimuli for improving performance in multimodal visual field guidance.</p>
<p>The <italic>medial effect</italic> might appear counterintuitive because participants received more information regarding the target stimulus from multimodal cues than from unimodal cues. Because both cues conveyed the same information, the <italic>synergy effect</italic> would be expected if participants used the received information properly. However, the simulation analysis showed that both effects could be observed under specific noise settings. This can also be explained theoretically: let <inline-formula id="inf232">
<mml:math id="m247">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf233">
<mml:math id="m248">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> be random variables for auditory and electrostatic force sensations, respectively. Then, <inline-formula id="inf234">
<mml:math id="m249">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf235">
<mml:math id="m250">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> represent the variances for each sensation. According to the integrated perception model (<xref ref-type="bibr" rid="B11">Ernst and Banks, 2002</xref>; <xref ref-type="bibr" rid="B9">Ernst, 2006</xref>; <xref ref-type="bibr" rid="B10">2007</xref>), the total sensation variance can be expressed as<disp-formula id="e16">
<mml:math id="m251">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>C</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
<label>(16)</label>
</disp-formula>where <inline-formula id="inf236">
<mml:math id="m252">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> denotes the covariance between <inline-formula id="inf237">
<mml:math id="m253">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf238">
<mml:math id="m254">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. If <inline-formula id="inf239">
<mml:math id="m255">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf240">
<mml:math id="m256">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> are independent, i.e., <inline-formula id="inf241">
<mml:math id="m257">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, and <inline-formula id="inf242">
<mml:math id="m258">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf243">
<mml:math id="m259">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> are equal, according to Eq. <xref ref-type="disp-formula" rid="e16">16</xref>, <inline-formula id="inf244">
<mml:math id="m260">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is less than both <inline-formula id="inf245">
<mml:math id="m261">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf246">
<mml:math id="m262">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, indicating a more efficient search performance than unimodal cueing (<italic>synergy effect</italic>) because smaller variances improve the performance. For example, if <inline-formula id="inf247">
<mml:math id="m263">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>5</mml:mn>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf248">
<mml:math id="m264">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> should be <inline-formula id="inf249">
<mml:math id="m265">
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x22c5;</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, suggesting intermediate performance if <inline-formula id="inf250">
<mml:math id="m266">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is used (<italic>medial effect</italic>). However, if <inline-formula id="inf251">
<mml:math id="m267">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf252">
<mml:math id="m268">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> are not independent and <inline-formula id="inf253">
<mml:math id="m269">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> has a certain value, <inline-formula id="inf254">
<mml:math id="m270">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="&#x7c;">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> increases and the <italic>synergy effect</italic> fades. In reality, <inline-formula id="inf255">
<mml:math id="m271">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> in <xref ref-type="fig" rid="F3">Figure 3</xref> controlled the dependence between <inline-formula id="inf256">
<mml:math id="m272">
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf257">
<mml:math id="m273">
<mml:mrow>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, as the variance of <inline-formula id="inf258">
<mml:math id="m274">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-italic">&#x3f5;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is determined based on the observation of <italic>synergy</italic> or <italic>medial effects.</italic> These results support the validity of the integrated perception model shown in <xref ref-type="fig" rid="F3">Figure 3</xref> as the underlying mechanism of visual search tasks with multimodal cues.</p>
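The variance argument above can be checked numerically. The following is a minimal sketch of Eq. 16 with illustrative variances (not fitted values from the experiment):

```python
def avg_cue_variance(v_a: float, v_e: float, cov_ae: float = 0.0) -> float:
    """Eq. (16): V((A + E) / 2) = (V(A) + V(E) + 2*Cov(A, E)) / 4."""
    return (v_a + v_e + 2.0 * cov_ae) / 4.0

v_a = 1.0
# Synergy effect: equal, independent cues halve the variance.
print(avg_cue_variance(v_a, v_a))              # 0.5, below both unimodal variances
# Medial effect: V(E) = 5*V(A) yields (3/2)*V(A), between V(A) and V(E).
print(avg_cue_variance(v_a, 5.0 * v_a))        # 1.5
# A positive covariance inflates the variance and erodes the synergy effect.
print(avg_cue_variance(v_a, v_a, cov_ae=0.5))  # 0.75
```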
<p>Addressing the out-of-view problem has been a major challenge in 360&#xb0; VR video viewing (<xref ref-type="bibr" rid="B26">Lin Y. T. et al., 2017</xref>; <xref ref-type="bibr" rid="B47">Schmitz et al., 2020</xref>; <xref ref-type="bibr" rid="B57">Wallgrun et al., 2020</xref>; <xref ref-type="bibr" rid="B59">Yamaguchi et al., 2021</xref>). Gentle and diegetic guidance that does not interfere with the visual content has received substantial attention from VR content providers (<xref ref-type="bibr" rid="B37">Nielsen et al., 2016</xref>; <xref ref-type="bibr" rid="B48">Sheikh et al., 2016</xref>; <xref ref-type="bibr" rid="B46">Rothe et al., 2017</xref>; <xref ref-type="bibr" rid="B45">Rothe and Hu&#xdf;mann, 2018</xref>; <xref ref-type="bibr" rid="B3">Bala et al., 2019</xref>; <xref ref-type="bibr" rid="B54">Tong et al., 2019</xref>). This study showed that subtle cues using artificial electrostatic force can guide the visual field, demonstrating their application potential for 360&#xb0; VR. Whereas previous studies using static electricity severely limited the movements of the user (<xref ref-type="bibr" rid="B14">Fukushima and Kajimoto, 2012b</xref>; <xref ref-type="bibr" rid="B13">2012a</xref>; <xref ref-type="bibr" rid="B24">Karasawa and Kajimoto, 2021</xref>), the use of the corona discharge phenomenon mitigated this limitation. The simulation analysis using the computational model helped elucidate the mechanisms of multimodal cueing. Similar to the observations in this study, previous studies using non-overt cues with perceptual uncertainty have reported both positive and negative effects of multimodal cueing in 360&#xb0; VR (<xref ref-type="bibr" rid="B48">Sheikh et al., 2016</xref>; <xref ref-type="bibr" rid="B45">Rothe and Hu&#xdf;mann, 2018</xref>; <xref ref-type="bibr" rid="B3">Bala et al., 2019</xref>; <xref ref-type="bibr" rid="B27">Malpica, Serrano, Gutierrez, et al., 2020a</xref>). We believe that our results also provide a rational explanation for these previous findings.</p>
<p>However, this study had some limitations. Some participants exhibited insufficient sensitivity to the electrostatic force stimuli. Although their hair moved when they were exposed to static electricity, they reported weak sensations, which may have been caused by skin moisture or other factors; this phenomenon has not yet been investigated. Furthermore, as humans are incapable of electroreception, it is reasonable to believe that the mechanoreceptors in the skin are involved in producing the sensations (<xref ref-type="bibr" rid="B21">Horch et al., 1977</xref>; <xref ref-type="bibr" rid="B23">Johnson, 2001</xref>; <xref ref-type="bibr" rid="B60">Zimmerman et al., 2014</xref>); however, this must be investigated further. In addition, the wristband used to tether the participants to the ground may have restricted free body movement; this can be addressed by introducing an ionizer that remotely neutralizes the charge level (<xref ref-type="bibr" rid="B38">Ohsawa, 2005</xref>), thereby allowing participants to move freely. Finally, the results presented in this study were obtained under reductive conditions. While they provide insight into stimulus design, further experiments are required to demonstrate the effectiveness in real-world VR applications such as video viewing and gaming, which will be the focus of our future study.</p>
<p>In future work, we will implement electrostatic stimulation in a VR application. We believe that haptic stimulation by electrostatic force could be used not only to guide the visual field but also to enhance the user&#x2019;s subjective impression. Although this has not been discussed here, we have experimentally implemented a VR game wherein a user shoots zombies charged with static electricity approaching from all sides. The electrostatic force-based stimulus can evoke unpleasant sensations. Other haptic stimuli, such as vibrations, could also be used to cue the zombies; however, we believe that such stimuli are too obvious and artificial, and may detract from the subjective quality of experience to a certain extent. In contrast, the use of static electricity can create an unsettling experience for users when charged zombies approach them from behind. Thus, by comparing the effects of electrostatic force and other haptic stimuli on subjective impressions, we will be able to demonstrate the suitability of electrostatic force-based stimulation for providing a highly immersive experience.</p>
</sec>
<sec sec-type="conclusion" id="s5">
<title>5 Conclusion</title>
<p>We investigated the multimodal effects of auditory and electrostatic force-based haptic cues on visual field guidance in 360&#xb0; VR, demonstrating the potential of a visual field guidance method that does not interfere with the visual content. We found that modulating the degree of perceptual uncertainty for each cue improves the overall guidance performance under simultaneous multimodal cueing. Moreover, we presented a simple haptic stimulation method using only a single channel of a corona charging gun. In the future, we will increase the number of channels to present more complex stimulation over a larger area by dynamically controlling the electric fields, allowing for remote haptic stimulation under a six-degrees-of-freedom viewing condition. Finally, our results showed that multimodal stimuli have the potential to increase the richness of VR environments.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s6">
<title>Data availability statement</title>
<p>The raw data supporting the conclusion of this article will be made available by the authors, upon reasonable request.</p>
</sec>
<sec id="s7">
<title>Ethics statement</title>
<p>The studies involving humans were approved by Japan Broadcasting Corporation. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="s8">
<title>Author contributions</title>
<p>YS: Conceptualization, Methodology, Formal analysis, Writing&#x2013;original draft, Writing&#x2013;review and editing. MH: Supervision, Writing&#x2013;review and editing. KK: Project administration, Writing&#x2013;review and editing.</p>
</sec>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<sec sec-type="COI-statement" id="s10">
<title>Conflict of interest</title>
<p>Authors YS, MH, and KK were employed by Japan Broadcasting Corporation.</p>
</sec>
<sec sec-type="disclaimer" id="s11">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bailey</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sudarsanam</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Grimm</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Subtle gaze direction</article-title>. <source>ACM Trans. Graph.</source> <volume>28</volume> (<issue>4</issue>), <fpage>1</fpage>&#x2013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1145/1559755.1559757</pub-id>
</citation>
</ref>
<ref id="B2">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bala</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Masu</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Nisi</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Nunes</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Cue control: interactive sound spatialization for 360&#xb0; videos</article-title>,&#x201d; in <source>Interactive storytelling. ICIDS 2018. Lecture notes in computer science, vol 11318</source> Editors <person-group person-group-type="editor">
<name>
<surname>Rouse</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Koenitz</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Haahr</surname>
<given-names>M.</given-names>
</name>
</person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>333</fpage>&#x2013;<lpage>337</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-04028-4_36</pub-id>
</citation>
</ref>
<ref id="B3">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Bala</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Masu</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Nisi</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Nunes</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>When the elephant trumps</article-title>,&#x201d; in <conf-name>Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</conf-name>, <conf-loc>Glasgow, Scotland, UK</conf-loc>, <conf-date>May 4-9, 2019</conf-date>, <fpage>1</fpage>&#x2013;<lpage>13</lpage>.</citation>
</ref>
<ref id="B4">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chang</surname>
<given-names>H.-Y.</given-names>
</name>
<name>
<surname>Tseng</surname>
<given-names>W.-J.</given-names>
</name>
<name>
<surname>Tsai</surname>
<given-names>C.-E.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H.-Y.</given-names>
</name>
<name>
<surname>Peiris</surname>
<given-names>R. L.</given-names>
</name>
<name>
<surname>Chan</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>FacePush: introducing normal force on face with head-mounted displays</article-title>,&#x201d; in <conf-name>Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology</conf-name>, <fpage>927</fpage>&#x2013;<lpage>935</lpage>.</citation>
</ref>
<ref id="B5">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chao</surname>
<given-names>F. Y.</given-names>
</name>
<name>
<surname>Ozcinar</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Zerman</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Hamidouche</surname>
<given-names>W.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). &#x201c;<article-title>Audio-visual perception of omnidirectional video for virtual reality applications</article-title>,&#x201d; in <conf-name>2020 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2020</conf-name>, <fpage>2</fpage>&#x2013;<lpage>7</lpage>.</citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cooper</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Milella</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Pinto</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Cant</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>White</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment</article-title>. <source>PLOS ONE</source> <volume>13</volume> (<issue>2</issue>), <fpage>e0191846</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0191846</pub-id>
</citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dalgarno</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>M. J. W.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>What are the learning affordances of 3&#x2010;D virtual environments?</article-title> <source>Br. J. Educ. Technol.</source> <volume>41</volume> (<issue>1</issue>), <fpage>10</fpage>&#x2013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-8535.2009.01038.x</pub-id>
</citation>
</ref>
<ref id="B8">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Danieau</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Guillo</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Dor&#xe9;</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Attention guidance for immersive video content in head-mounted displays</article-title>,&#x201d; in <conf-name>2017 IEEE Virtual Reality (VR)</conf-name>, <fpage>205</fpage>&#x2013;<lpage>206</lpage>.</citation>
</ref>
<ref id="B9">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group> (<year>2006</year>). &#x201c;<article-title>A bayesian view on multimodal cue integration</article-title>,&#x201d; in <source>Human body perception from the inside out: advances in visual cognition</source> (<publisher-name>Oxford University Press</publisher-name>), <fpage>105</fpage>&#x2013;<lpage>131</lpage>.</citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Learning to integrate arbitrary signals from vision and touch</article-title>. <source>J. Vis.</source> <volume>7</volume> (<issue>5</issue>), <fpage>7</fpage>. <pub-id pub-id-type="doi">10.1167/7.5.7</pub-id>
</citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group> (<year>2002</year>). <article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>. <source>Nature</source> <volume>415</volume> (<issue>6870</issue>), <fpage>429</fpage>&#x2013;<lpage>433</lpage>. <pub-id pub-id-type="doi">10.1038/415429a</pub-id>
</citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Flach</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Holden</surname>
<given-names>J. G.</given-names>
</name>
</person-group> (<year>1998</year>). <article-title>The reality of experience: gibson&#x2019;s way</article-title>. <source>Presence</source> <volume>7</volume> (<issue>1</issue>), <fpage>90</fpage>&#x2013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1162/105474698565550</pub-id>
</citation>
</ref>
<ref id="B13">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Fukushima</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kajimoto</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2012a</year>). &#x201c;<article-title>Chilly chair: facilitating an emotional feeling with artificial piloerection</article-title>,&#x201d; in <source>ACM SIGGRAPH 2012 emerging technologies (SIGGRAPH &#x2019;12), Article 5</source>, <fpage>1</fpage>&#x2013;<lpage>1</lpage>. <pub-id pub-id-type="doi">10.1145/2343456.2343461</pub-id>
</citation>
</ref>
<ref id="B14">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fukushima</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kajimoto</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2012b</year>). &#x201c;<article-title>Facilitating a surprised feeling by artificial control of piloerection on the forearm</article-title>,&#x201d; in <conf-name>Proceedings of the 3rd Augmented Human International Conference (AH &#x2019;12), Article 8</conf-name>, <fpage>1</fpage>&#x2013;<lpage>4</lpage>.</citation>
</ref>
<ref id="B15">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>J. J.</given-names>
</name>
</person-group> (<year>1979</year>). <source>The ecological approach to visual perception</source>. <publisher-name>Houghton Mifflin</publisher-name>.</citation>
</ref>
<ref id="B16">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Gruenefeld</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>El Ali</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Boll</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Heuten</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Beyond halo and wedge: visualizing out-of-view objects on head-mounted virtual and augmented reality devices</article-title>,&#x201d; in <conf-name>Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI &#x2019;18)</conf-name>, <fpage>1</fpage>&#x2013;<lpage>11</lpage>.</citation>
</ref>
<ref id="B17">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Gruenefeld</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>El Ali</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Heuten</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Boll</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2017a</year>). &#x201c;<article-title>Visualizing out-of-view objects in head-mounted augmented reality</article-title>,&#x201d; in <conf-name>Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI &#x2019;17), Article 87</conf-name>, <fpage>1</fpage>&#x2013;<lpage>7</lpage>.</citation>
</ref>
<ref id="B18">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Gruenefeld</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Ennenga</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>El Ali</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Heuten</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Boll</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2017b</year>). &#x201c;<article-title>EyeSee360: designing a visualization technique for out-of-view objects in head-mounted augmented reality</article-title>,&#x201d; in <conf-name>Proceedings of the 5th Symposium on Spatial User Interaction (SUI &#x2019;17)</conf-name>, <fpage>109</fpage>&#x2013;<lpage>118</lpage>.</citation>
</ref>
<ref id="B19">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Gruenefeld</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Koethe</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Lange</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Wei&#xdf;</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Heuten</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>Comparing techniques for visualizing moving out-of-view objects in head-mounted virtual reality</article-title>,&#x201d; in <conf-name>2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)</conf-name>, <fpage>742</fpage>&#x2013;<lpage>746</lpage>.</citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harada</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Ohyama</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Quantitative evaluation of visual guidance effects for 360-degree directions</article-title>. <source>Virtual Real.</source> <volume>26</volume> (<issue>2</issue>), <fpage>759</fpage>&#x2013;<lpage>770</lpage>. <pub-id pub-id-type="doi">10.1007/s10055-021-00574-7</pub-id>
</citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Horch</surname>
<given-names>K. W.</given-names>
</name>
<name>
<surname>Tuckett</surname>
<given-names>R. P.</given-names>
</name>
<name>
<surname>Burgess</surname>
<given-names>P. R.</given-names>
</name>
</person-group> (<year>1977</year>). <article-title>A key to the classification of cutaneous mechanoreceptors</article-title>. <source>J. Invest. Dermatol.</source> <volume>69</volume> (<issue>1</issue>), <fpage>75</fpage>&#x2013;<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1111/1523-1747.ep12497887</pub-id>
</citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>H&#xfc;ttner</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>von Fersen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Miersch</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Dehnhardt</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2023</year>). <article-title>Passive electroreception in bottlenose dolphins (<italic>Tursiops truncatus</italic>): implication for micro- and large-scale orientation</article-title>. <source>J. Exp. Biol.</source> <volume>226</volume> (<issue>22</issue>), <fpage>jeb245845</fpage>. <pub-id pub-id-type="doi">10.1242/jeb.245845</pub-id>
</citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2001</year>). <article-title>The roles and functions of cutaneous mechanoreceptors</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>11</volume> (<issue>4</issue>), <fpage>455</fpage>&#x2013;<lpage>461</lpage>. <pub-id pub-id-type="doi">10.1016/S0959-4388(00)00234-8</pub-id>
</citation>
</ref>
<ref id="B24">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Karasawa</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kajimoto</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Presentation of a feeling of presence using an electrostatic field: presence-like sensation presentation using an electrostatic field</article-title>,&#x201d; in <conf-name>Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA &#x2019;21), Article 285</conf-name>, <fpage>1</fpage>&#x2013;<lpage>4</lpage>.</citation>
</ref>
<ref id="B25">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>Y.-C.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>Y.-J.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>H.-N.</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>H.-T.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>C.-W.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Tell me where to look</article-title>,&#x201d; in <conf-name>Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems</conf-name>, <fpage>2535</fpage>&#x2013;<lpage>2545</lpage>.</citation>
</ref>
<ref id="B26">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>Y. T.</given-names>
</name>
<name>
<surname>Liao</surname>
<given-names>Y. C.</given-names>
</name>
<name>
<surname>Teng</surname>
<given-names>S. Y.</given-names>
</name>
<name>
<surname>Chung</surname>
<given-names>Y. J.</given-names>
</name>
<name>
<surname>Chan</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>B. Y.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Outside-in: visualizing out-of-sight regions-of-interest in a 360 video using spatial picture-in-picture previews</article-title>,&#x201d; in <conf-name>Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST &#x2019;17)</conf-name>, <fpage>255</fpage>&#x2013;<lpage>265</lpage>.</citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Malpica</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Serrano</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Allue</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bedia</surname>
<given-names>M. G.</given-names>
</name>
<name>
<surname>Masia</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2020a</year>). <article-title>Crossmodal perception in virtual reality</article-title>. <source>Multimedia Tools Appl.</source> <volume>79</volume> (<issue>5&#x2013;6</issue>), <fpage>3311</fpage>&#x2013;<lpage>3331</lpage>. <pub-id pub-id-type="doi">10.1007/s11042-019-7331-z</pub-id>
</citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Malpica</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Serrano</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gutierrez</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Masia</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2020b</year>). <article-title>Auditory stimuli degrade visual performance in virtual reality</article-title>. <source>Sci. Rep.</source> <volume>10</volume> (<issue>1</issue>), <fpage>12363</fpage>&#x2013;<lpage>12369</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-020-69135-3</pub-id>
</citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martin</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Malpica</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gutierrez</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Masia</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Serrano</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Multimodality in VR: a survey</article-title>. <source>ACM Comput. Surv.</source> <volume>54</volume> (<issue>10s</issue>), <fpage>1</fpage>&#x2013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1145/3508361</pub-id>
</citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Masia</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Camon</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Gutierrez</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Serrano</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Influence of directional sound cues on users&#x2019; exploration across 360&#xb0; movie cuts</article-title>. <source>IEEE Comput. Graph. Appl.</source> <volume>41</volume> (<issue>4</issue>), <fpage>64</fpage>&#x2013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1109/MCG.2021.3064688</pub-id>
</citation>
</ref>
<ref id="B31">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Matsuda</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nozawa</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Takata</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Izumihara</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Rekimoto</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>HapticPointer</article-title>,&#x201d; in <conf-name>Proceedings of the Augmented Humans International Conference (AHs &#x2019;20)</conf-name>, <fpage>1</fpage>&#x2013;<lpage>10</lpage>.</citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>McElree</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Carrasco</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>1999</year>). <article-title>The temporal dynamics of visual search: evidence for parallel processing in feature and conjunction searches</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>25</volume> (<issue>6</issue>), <fpage>1517</fpage>&#x2013;<lpage>1539</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.25.6.1517</pub-id>
</citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Melo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Goncalves</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Monteiro</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Coelho</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Vasconcelos-Raposo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bessa</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Do multisensory stimuli benefit the virtual reality experience? A systematic review</article-title>. <source>IEEE Trans. Vis. Comput. Graph.</source> <volume>28</volume> (<issue>2</issue>), <fpage>1428</fpage>&#x2013;<lpage>1442</lpage>. <pub-id pub-id-type="doi">10.1109/TVCG.2020.3010088</pub-id>
</citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mikropoulos</surname>
<given-names>T. A.</given-names>
</name>
<name>
<surname>Natsis</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>Educational virtual environments: a ten-year review of empirical research (1999&#x2013;2009)</article-title>. <source>Comput. Educ.</source> <volume>56</volume> (<issue>3</issue>), <fpage>769</fpage>&#x2013;<lpage>780</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2010.10.020</pub-id>
</citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Murray</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Qiao</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Muntean</surname>
<given-names>G. M.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Olfaction-enhanced multimedia: a survey of application domains, displays, and research challenges</article-title>. <source>ACM Comput. Surv.</source> <volume>48</volume> (<issue>4</issue>), <fpage>1</fpage>&#x2013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1145/2816454</pub-id>
</citation>
</ref>
<ref id="B36">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newton</surname>
<given-names>K. C.</given-names>
</name>
<name>
<surname>Gill</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Kajiura</surname>
<given-names>S. M.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Electroreception in marine fishes: chondrichthyans</article-title>. <source>J. Fish Biol.</source> <volume>95</volume> (<issue>1</issue>), <fpage>135</fpage>&#x2013;<lpage>154</lpage>. <pub-id pub-id-type="doi">10.1111/jfb.14068</pub-id>
</citation>
</ref>
<ref id="B37">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Nielsen</surname>
<given-names>L. T.</given-names>
</name>
<name>
<surname>M&#xf8;ller</surname>
<given-names>M. B.</given-names>
</name>
<name>
<surname>Hartmeyer</surname>
<given-names>S. D.</given-names>
</name>
<name>
<surname>Ljung</surname>
<given-names>T. C. M.</given-names>
</name>
<name>
<surname>Nilsson</surname>
<given-names>N. C.</given-names>
</name>
<name>
<surname>Nordahl</surname>
<given-names>R.</given-names>
</name>
<etal/>
</person-group> (<year>2016</year>). &#x201c;<article-title>Missing the point</article-title>,&#x201d; in <conf-name>Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology</conf-name>, <fpage>229</fpage>&#x2013;<lpage>232</lpage>.</citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ohsawa</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2005</year>). <article-title>Modeling of charge neutralization by ionizer</article-title>. <source>J. Electrost.</source> <volume>63</volume> (<issue>6&#x2013;10</issue>), <fpage>767</fpage>&#x2013;<lpage>773</lpage>. <pub-id pub-id-type="doi">10.1016/j.elstat.2005.03.043</pub-id>
</citation>
</ref>
<ref id="B39">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Pavel</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hartmann</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Agrawala</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Shot orientation controls for interactive cinematography with 360&#xb0; video</article-title>,&#x201d; in <conf-name>Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST &#x2019;17)</conf-name>, <fpage>289</fpage>&#x2013;<lpage>297</lpage>.</citation>
</ref>
<ref id="B40">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Proske</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Gregory</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Iggo</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>1998</year>). <article-title>Sensory receptors in monotremes</article-title>. <source>Philosophical Trans. R. Soc. B Biol. Sci.</source> <volume>353</volume> (<issue>1372</issue>), <fpage>1187</fpage>&#x2013;<lpage>1198</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.1998.0275</pub-id>
</citation>
</ref>
<ref id="B41">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ranasinghe</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Jain</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Karwita</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tolley</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Do</surname>
<given-names>E. Y.-L.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Ambiotherm</article-title>,&#x201d; in <conf-name>Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems</conf-name>, <fpage>1731</fpage>&#x2013;<lpage>1742</lpage>.</citation>
</ref>
<ref id="B42">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ranasinghe</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Jain</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Thi Ngoc Tram</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Koh</surname>
<given-names>K. C. R.</given-names>
</name>
<name>
<surname>Tolley</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Karwita</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). &#x201c;<article-title>Season traveller</article-title>,&#x201d; in <conf-name>Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems</conf-name>, <fpage>1</fpage>&#x2013;<lpage>13</lpage>.</citation>
</ref>
<ref id="B43">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rothe</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Althammer</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Khamis</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>GazeRecall: using gaze direction to increase recall of details in cinematic virtual reality</article-title>,&#x201d; in <conf-name>Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia</conf-name>, <fpage>115</fpage>&#x2013;<lpage>119</lpage>.</citation>
</ref>
<ref id="B44">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rothe</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Buschek</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hu&#xdf;mann</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Guidance in cinematic virtual reality-taxonomy, research status and challenges</article-title>. <source>Multimodal Technol. Interact.</source> <volume>3</volume> (<issue>1</issue>), <fpage>19</fpage>. <pub-id pub-id-type="doi">10.3390/mti3010019</pub-id>
</citation>
</ref>
<ref id="B45">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Rothe</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hu&#xdf;mann</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Guiding the viewer in cinematic virtual reality by diegetic cues</article-title>,&#x201d; in <source>Augmented reality, virtual reality, and computer graphics. AVR 2018. Lecture notes in computer science</source> Editors <person-group person-group-type="editor">
<name>
<surname>De Paolis</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bourdot</surname>
<given-names>P.</given-names>
</name>
</person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>101</fpage>&#x2013;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-95270-3_7</pub-id>
</citation>
</ref>
<ref id="B46">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rothe</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hu&#xdf;mann</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Allary</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Diegetic cues for guiding the viewer in cinematic virtual reality</article-title>,&#x201d; in <conf-name>Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology</conf-name>, <fpage>1</fpage>&#x2013;<lpage>2</lpage>.</citation>
</ref>
<ref id="B47">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Schmitz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Macquarrie</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Julier</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Binetti</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Steed</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Directing versus attracting attention: exploring the effectiveness of central and peripheral cues in panoramic videos</article-title>,&#x201d; in <conf-name>Proceedings - 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2020</conf-name>, <fpage>63</fpage>&#x2013;<lpage>72</lpage>.</citation>
</ref>
<ref id="B48">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sheikh</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Watson</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Directing attention in 360-degree video</article-title>. <source>IBC 2016 Conf.</source>, <fpage>1</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1049/ibc.2016.0029</pub-id>
</citation>
</ref>
<ref id="B49">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slater</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments</article-title>. <source>Philosophical Trans. R. Soc. B Biol. Sci.</source> <volume>364</volume> (<issue>1535</issue>), <fpage>3549</fpage>&#x2013;<lpage>3557</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2009.0138</pub-id>
</citation>
</ref>
<ref id="B50">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slater</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Banakou</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Beacco</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gallego</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Macia-Varela</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Oliva</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>A separate reality: an update on place illusion and plausibility in virtual reality</article-title>. <source>Front. Virtual Real.</source> <volume>3</volume>. <pub-id pub-id-type="doi">10.3389/frvir.2022.914392</pub-id>
</citation>
</ref>
<ref id="B51">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>Crossmodal correspondences: a tutorial review</article-title>. <source>Atten. Percept. Psychophys.</source> <volume>73</volume> (<issue>4</issue>), <fpage>971</fpage>&#x2013;<lpage>995</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-010-0073-7</pub-id>
</citation>
</ref>
<ref id="B52">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suzuki</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Abe</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Sato</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Proposal of perception method of existence of objects in 3D space using quasi-electrostatic field</article-title>. <source>Int. Conf. Human-Computer Interact.</source>, <fpage>561</fpage>&#x2013;<lpage>571</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-49760-6_40</pub-id>
</citation>
</ref>
<ref id="B53">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tanaka</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lopes</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Electrical head actuation: enabling interactive systems to directly manipulate head orientation</article-title>. <source>Proc. 2022 CHI Conf. Hum. Factors Comput. Syst.</source> <volume>1</volume>, <fpage>1</fpage>&#x2013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1145/3491102.3501910</pub-id>
</citation>
</ref>
<ref id="B54">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Tong</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Jung</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lindeman</surname>
<given-names>R. W.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>Action units: directing user attention in 360-degree video based VR</article-title>,&#x201d; in <conf-name>25th ACM Symposium on Virtual Reality Software and Technology</conf-name>, <fpage>1</fpage>&#x2013;<lpage>2</lpage>.</citation>
</ref>
<ref id="B55">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Treisman</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Gelade</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>1980</year>). <article-title>A feature-integration theory of attention</article-title>. <source>Cogn. Psychol.</source> <volume>12</volume> (<issue>1</issue>), <fpage>97</fpage>&#x2013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0285(80)90005-5</pub-id>
</citation>
</ref>
<ref id="B56">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Walker</surname>
<given-names>B. N.</given-names>
</name>
<name>
<surname>Lindsay</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2003</year>). &#x201c;<article-title>Effect of beacon sounds on navigation performance in a virtual reality environment</article-title>,&#x201d; in <conf-name>Proceedings of the 9th International Conference on Auditory Display (ICAD2003), July</conf-name>, <fpage>204</fpage>&#x2013;<lpage>207</lpage>.</citation>
</ref>
<ref id="B57">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Wallgr&#xfc;n</surname>
<given-names>J. O.</given-names>
</name>
<name>
<surname>Bagher</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Sajjadi</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Klippel</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>A comparison of visual attention guiding approaches for 360&#xb0; image-based VR tours</article-title>,&#x201d; in <conf-name>2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)</conf-name>, <fpage>83</fpage>&#x2013;<lpage>91</lpage>.</citation>
</ref>
<ref id="B58">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Hou</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Survey on multisensory feedback virtual reality dental training systems</article-title>. <source>Eur. J. Dent. Educ.</source> <volume>20</volume> (<issue>4</issue>), <fpage>248</fpage>&#x2013;<lpage>260</lpage>. <pub-id pub-id-type="doi">10.1111/eje.12173</pub-id>
</citation>
</ref>
<ref id="B59">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yamaguchi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ogawa</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Narumi</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Now I&#x2019;m not afraid: reducing fear of missing out in 360&#xb0; videos on a head-mounted display using a panoramic thumbnail</article-title>,&#x201d; in <conf-name>2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)</conf-name>, <fpage>176</fpage>&#x2013;<lpage>183</lpage>.</citation>
</ref>
<ref id="B60">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zimmerman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bai</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ginty</surname>
<given-names>D. D.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>The gentle touch receptors of mammalian skin</article-title>. <source>Science</source> <volume>346</volume> (<issue>6212</issue>), <fpage>950</fpage>&#x2013;<lpage>954</lpage>. <pub-id pub-id-type="doi">10.1126/science.1254229</pub-id>
</citation>
</ref>
</ref-list>
</back>
</article>