- 1Department of Biomedical Engineering, Ben-Gurion University, Be’er Sheva, Israel
- 2School of Brain Science, Ben-Gurion University, Be’er Sheva, Israel
Introduction: As we walk, we perceive our motion through external channels such as vision, and through internal ones such as the vestibular and proprioceptive senses. But what happens when these channels offer contradicting information? Previous work has shown that by manipulating visual gain during movement, a user’s path can be redirected, a procedure known as redirected walking. While the behavioral dominance of visual cues on the immediate path has been well demonstrated, potential residual effects of the internal senses on path integration have not been well quantified: will the clash disrupt participants’ answers in the visual reference frame, or even bias them towards the correct answer in the internal cues’ reference frame as part of a weighted integration process? Or will vision dominate to the point where it suppresses the other clashing inputs? Furthermore, it is unclear to what extent such effects are consistent within and across individuals.
Methods: Here, we use the classic triangle completion task combined with redirected walking to quantify this balance and test for accuracy in the visual reference frame vs. the idiothetic reference frame.
Results: We find that, as expected, vision dominates participants' conscious perception; that accuracy within the idiothetic reference frame is low on average compared to a control condition in which the tasks were performed without redirection; and that it is modulated by the level of redirection gain in a way that is generally stable within individuals. However, we also find significant individual differences across participants, with some even showing a strong, stable bias in the opposite direction, towards the idiothetic reference frame.
Discussion: These findings offer insight into the basic science of human navigation and multisensory integration, and hold implications for practical applications in virtual environment design and locomotion with redirected walking. Specifically, VR developers and researchers should keep in mind that even when a user’s path is being properly redirected, their real-world physical path may still leave traces in their internal representations of space, with potential residual effects on spatial memory at the individual participant level.
1 Introduction
Typically, when we walk from one point to another, our distal sensory channels, such as vision, are aligned with our internal idiothetic sensory channels, such as proprioception and the vestibular sense. This gives us a clear multisensory representation of our path from start to endpoint, in which the sensory information is integrated across channels to increase precision. The integration between these sensory channels has been studied extensively for momentary clashes (Zhong, 2022), with several models offered, such as optimal integration models (Bayesian, Maximum Likelihood (Ernst and Banks, 2002; Oess et al., 2020)) and competition/race models (Chua et al., 2022). However, what happens when the information across these channels clashes in a sustained and continuous fashion, leading to a different perceived location in each one? Will one of the reference frames take over and suppress the other, or will they be integrated into a new perceived location?
On the one hand, race and competition models would suggest that when one sensory channel has high reliability while others offer clashing information, the clashing channels will lose the competition and not influence behavior. On the other hand, optimal integration models such as MLE and Bayes would suggest that while the clashing channel would receive a low weight, it might still have a residual effect on the results.
Situations in which our sensory channels clash over prolonged periods of time are rare, and mainly limited to unique setups such as theme park rides (Lee et al., 2023; Ramy and Herdman, 2023). However, in recent decades, advancements in virtual reality (VR) have made such situations not only possible but increasingly common, even during natural-feeling scenarios. One such example is a locomotion technique called redirected walking (RDW), which unfortunately has not received enough attention outside of the VR community.
RDW is a virtual reality locomotion technique that was developed to transform small environments in physical space into large or even infinite virtual environments (Razzaque et al., 2001; Li et al., 2022). The technique manipulates the environment around the player in subtle ways, such as small incremental rotational and/or translational gains, so that, based on their visual perception, users believe they are moving differently than they actually are in physical space. RDW has seen growing use from its introduction over 20 years ago (Razzaque et al., 2001) to recent work on predicting future user positions (Jeon et al., 2024) and combining auditory cues (Ogawa et al., 2024). It has also been combined with multisensory research to quantitatively measure the detection thresholds of rotational, translational and curvature gains (Steinicke et al., 2010), allowing informed design of RDW algorithms that maximize the gains while remaining imperceptible to users. The behavioral dominance of visual information has been shown in ways beyond the basic function of the algorithm - for example, user orientation under rotation gains has been shown to align participants’ perception of physical targets with the virtual visual reference frame, suggesting that visual reorientation in virtual space (RDW) carries over to the user’s real-world heading model (Suma et al., 2011).
RDW techniques have been extensively studied and refined over the past two decades, in attempts to improve usability, ease of development, and steering algorithms that divert the user from physical obstacles. Several notable contributions improve upon the classic Steer to Center (S2C) approach, which actively rotates the user towards the center of the room. Among these are model predictive controllers (Nescher et al., 2014) that plan and apply redirection according to user trajectory predictions. More recent steering algorithms include a deep reinforcement learning model that outputs rotation, translation and curvature gains from the user’s position in the tracking space, trained on simulated random-walk paths (Strauss et al., 2020). These models were often shown to outperform the classic S2C algorithms.
Modern predictive redirected walking techniques have also been used to tackle multiuser scenarios collocated in a shared physical space (Hirt et al., 2024). Hirt et al. suggested new predictive techniques based on clothoid trajectories and planning approaches built on artificial potential fields, and demonstrated their validity in a multiuser study, significantly outperforming other tested algorithms.
By utilizing rotational-gain RDW, we can separate the participant's movement into two separate reference frames, each with its own path: users take one path in the reference frame of their internal senses, and a second path in their visual reference frame. While this method has been implemented in a series of games and in experiments focusing on perception and nausea, its effects on other cognitive aspects, such as the user’s cognitive map and spatial memory, have only recently begun to be explored in depth. For example, a recent study compared spatial memory and navigation performance in RDW vs. regular walking in immersive VR and showed no significant differences within the virtual environment, suggesting that RDW’s visual dominance was enough to match performance on these measures (De Back et al., 2025). However, this study compared performance only within the virtual environment, not against other reference frames such as the actual physical location. Another study tested spatial tasks within the virtual environment from the perspective of redirected walking quality, comparing performance with and without redirection, also finding dominance of the visual reference frame, but did not attempt to quantify the effects of the internal reference frames (Hodgson et al., 2008).
On the other hand, while vision typically dominates in VR, it is also well established that spatial tasks (Maidenbaum et al., 2025), and path integration in particular (Chance et al., 1998), are improved by adding internal reference-frame cues through physical movement. This demonstrates that these cues are not simply overridden by vision during typical VR navigation, but rather are integrated into a joint, more accurate percept - however, it is unclear how this integration is affected by a sustained multisensory clash.
Thus, here we utilize a standard task for testing path integration - triangle completion (Loomis et al., 1993; Anson et al., 2019) - to test how RDW affects the user’s internal multisensory map, and the spatial behavior based on it, as they attempt to return to a point of origin in each sensory reference frame while the visual and idiothetic sensory channels are disentangled. While homing tasks such as triangle completion have been used extensively for path integration research (Zhong, 2022; Loomis et al., 1993; Anson et al., 2019), including in virtual reality (Chance et al., 1998; Cinelli and Michael, 2018), and while redirected walking has been well studied (Razzaque et al., 2001; Li et al., 2022; Jeon et al., 2024; Ogawa et al., 2024), to the best of our knowledge no paper has combined the two in a quantitative manner. A notable exception exploring a similar idea compared path integration between age groups by applying a single specific shift of the virtual environment that misaligned the two reference frames in order to pick apart the sensory channels (Shayman et al., 2024). In their work, the virtual environment was shifted once by 15°, rather than continuously as done in RDW. Our research expands on this idea with a continuous rotational gain throughout the trials rather than a single discrete change, allowing us to further explore the cue separation.
Another important factor is individual differences - most previous work in this direction focused only on group-level performance, leaving it unclear how consistent individual users are in their biases, and what percentage of users employ which subconscious multisensory integration strategy.
Studying these interactions will further our basic knowledge of human navigation and spatial behavior, multisensory integration, and path integration, and offer applied potential and implications for future development of this category of locomotion techniques (RDW) in the growing field of VR.
As established above, vision is likely to dominate the participants’ paths in this study as well; however, the question remains whether residual effects from the idiothetic sensory channels can be detected in a spatial memory task, and how consistent participants’ behavior will be with regard to the balance between these two sensory channels.
2 Materials and methods
2.1 Participants
Twenty-three participants took part in our experiment (10 females, age range 18–40). This number of participants was chosen based on a power calculation from the first four participants, aimed at achieving a statistical power of 0.8 (1 − β = 0.8) for a large effect size of Cohen’s d = 0.8 in a within-sample paired analysis (which would require a minimum of n = 15). This group size matches those of previous literature, such as n = 10 and n = 18 (Ogawa et al., 2024); n = 16 (Rietzler et al., 2020); n = 20 (Weller et al., 2022).
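For reference, the required sample size can be approximated in closed form; this is a back-of-the-envelope check rather than the exact noncentral-t computation performed by standard power software. For a two-sided paired t-test with α = 0.05 and power 1 − β = 0.8, using Guenther's small-sample correction:

$$n \;\approx\; \left(\frac{z_{1-\alpha/2}+z_{1-\beta}}{d}\right)^{2}+\frac{z_{1-\alpha/2}^{2}}{2} \;=\; \left(\frac{1.96+0.84}{0.8}\right)^{2}+1.92 \;\approx\; 14.2,$$

which rounds up to the minimum of n = 15 stated above.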
As detailed below, we also verified that our group size is sufficiently powered for the final effect we found, which was of a slightly smaller magnitude (d = 0.66, requiring a minimum of n = 19 participants).
2.2 Ethics
The experiment was approved by the IRB of Ben Gurion University (approval: 2254–1) and all participants provided their informed consent.
2.3 Tools
We designed our virtual reality environment using the Unity 3D engine (version 2022.3.4f1) (Unity Technologies, 2022) and C#. In terms of hardware, we utilized the Meta Quest 3 standalone virtual reality Head Mounted Display (HMD).
2.4 Redirected walking algorithm
We implemented a rotational-gain RDW algorithm (Razzaque et al., 2001). The algorithm tracks the rotation of the participant in real time and rotates the environment clockwise in small increments once the participant’s rotation surpasses a configurable threshold. The subtle rotations of the environment are imperceptible to the participant, such that the path in the virtual environment (visual channel) and the path in physical space (idiothetic channel) are controllably unlinked. We implemented two settings of rotational gain, amplifying the player’s virtual rotation to 104% of their physical rotation in the lower gain setting and 108% in the higher gain setting, provided that the rotation of the participant surpassed a threshold of 1° between two frames. These thresholds and gain values are within those typically used in RDW, and were chosen based on empirical pilot testing in the lab such that users would not be conscious of the RDW on the one hand, but that it would offer a meaningful distance between the two reference frames’ targets on the other.
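For illustration, a minimal Unity sketch of this rotational-gain logic is shown below. It assumes a hypothetical component with an `environmentRoot` transform parenting all virtual scenery; this is a simplified sketch of the general technique, not the authors’ exact implementation.

```csharp
using UnityEngine;

// Minimal sketch of rotational-gain redirection (illustrative names, not the
// authors' exact implementation). Each frame, if the user's head rotated more
// than `rotationThreshold` degrees, the environment is counter-rotated by the
// extra (gain - 1) fraction of that rotation, so the virtual turn appears
// amplified relative to the physical one.
public class RotationalGainRedirector : MonoBehaviour
{
    public Transform head;               // HMD (camera) transform
    public Transform environmentRoot;    // parent of all virtual scenery
    public float gain = 1.04f;           // 104% (lower) or 1.08f (higher)
    public float rotationThreshold = 1f; // degrees between two frames

    private float previousYaw;

    void Start() => previousYaw = head.eulerAngles.y;

    void LateUpdate()
    {
        float yaw = head.eulerAngles.y;
        float delta = Mathf.DeltaAngle(previousYaw, yaw);

        if (Mathf.Abs(delta) > rotationThreshold)
        {
            // Rotating the environment opposite to the head's rotation by
            // delta * (gain - 1) makes the user's rotation relative to the
            // environment equal to delta * gain. For the counterclockwise
            // turns used in this study, this rotates the scenery clockwise.
            float injected = delta * (gain - 1f);
            environmentRoot.RotateAround(head.position, Vector3.up, -injected);
        }
        previousYaw = yaw;
    }
}
```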
2.5 Triangle completion paradigm
We implemented a virtual reality version of the classic triangle completion paradigm (Zhong, 2022) in a desert environment in which the participant is placed. Participants must first walk to a floating coin (Figure 1), and were instructed to walk in a straight line. Upon arrival, the first coin vanishes and is replaced by a second floating coin. Participants were then instructed to locate the second coin, rotate to face it, and walk to it. The second coin then vanishes, and participants must complete the triangle by rotating a second time towards their estimated direction of the starting location and returning there in a straight line. Participants had to confirm that they had reached their estimated origin point via a button press. Participants wore immersive headsets and physically walked throughout the task.
Figure 1. Screenshots from the virtual environment. (A) The image displays the objective (coin) in the center, the virtual landscape and the prompts provided to the participants. (B) A closeup of the coin. (C) The instruction to move to the second coin.
All triangles were designed such that participants had to turn counterclockwise to reach the second coin and the starting position, to minimize variability between the triangles; this was not disclosed to the participants.
Throughout the trials, a script logs the position and rotation of the participant, the rotation of the environment, and several flags for the analysis, such as the activation state of the objectives and the participant’s confirmation of reaching their origin point.
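A minimal sketch of such a per-frame logger is shown below; the file layout and field names are illustrative assumptions, not the authors’ actual log format.

```csharp
using System.IO;
using UnityEngine;

// Illustrative per-frame trial logger (field names are assumptions).
public class TrialLogger : MonoBehaviour
{
    public Transform head;            // participant HMD
    public Transform environmentRoot; // redirected environment
    public bool coinActive;           // flag: current objective is active
    public bool originConfirmed;      // flag: participant confirmed origin

    private StreamWriter writer;

    void Start()
    {
        string path = Path.Combine(Application.persistentDataPath, "trial_log.csv");
        writer = new StreamWriter(path);
        writer.WriteLine("time,posX,posZ,headYaw,envYaw,coinActive,originConfirmed");
    }

    void Update()
    {
        writer.WriteLine(
            $"{Time.time:F3},{head.position.x:F3},{head.position.z:F3}," +
            $"{head.eulerAngles.y:F2},{environmentRoot.eulerAngles.y:F2}," +
            $"{coinActive},{originConfirmed}");
    }

    void OnDestroy() => writer?.Dispose();
}
```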
At the beginning of each trial, the participants performed a standard Meta calibration of their position, which also automatically corrects for user height. IPD was only adjusted for participants who experienced blurred vision. The study conductor verified that the participants had no view of the physical environment by adjusting the headset straps until the participants reported they could not see it. The experiment environments and times were selected to minimize ambient noise, though noise was not specifically controlled for.
2.6 Experiment design
Each participant performed two blocks of trials. The first block consisted of 24 trials comprising 12 triangles (detailed in Supplementary Table S1), such that each triangle was repeated twice, once with rotational-gain RDW and once without rotational gains. The gain in these trials was set to the lower setting of 104% to ensure the changes to the environment remained subtle. The second block consisted of 12 trials, in which six triangles (Supplementary Table S1), a randomly selected subset of the 12 triangles from the first block that remained consistent across participants, were each repeated twice under the same conditions as the first block, but with a higher rotational gain of 108%. These gains were applied each time the participant turned: towards the second coin after reaching the first coin, and towards the starting location after reaching the second coin. Both the lower and higher gains are relatively low and conservative, to avoid any detection of the redirection. The order of trials within each block was randomized, and block order varied between participants.
The unlinking of the idiothetic sensory channel from the visual one results in two separate paths. The idiothetic path of the participant is the position of the HMD in global physical space, while the visual path is expressed in the virtual environment’s coordinates. In the trials without rotational gains, the two paths are of course aligned and identical.
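Concretely, given a tracked head position, the two paths can be recovered as sketched below (names are illustrative): the idiothetic position is the HMD’s world position, while the visual position is the same point expressed in the rotated environment’s local coordinates.

```csharp
using UnityEngine;

// Sketch: recovering the two reference frames from one tracked position.
public static class ReferenceFrames
{
    // Idiothetic path: the HMD's position in global physical space.
    public static Vector3 IdiotheticPosition(Transform head)
        => head.position;

    // Visual path: the same point in the (rotated) environment's local
    // coordinates, the frame in which the coins and origin are defined.
    // Without redirection the environment is unrotated and the two agree.
    public static Vector3 VisualPosition(Transform head, Transform environmentRoot)
        => environmentRoot.InverseTransformPoint(head.position);
}
```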
After providing informed consent and receiving a brief explanation of the task, each participant donned the HMD and completed all trials. Following the trials, the participants filled out a post-experiment questionnaire covering demographic information, experience related to the task, and general sensations during the experiment. Additionally, the participants filled out the Santa Barbara Sense Of Direction (SBSOD) questionnaire (Hegarty et al., 2002), which measures self-reported general sense of direction and navigation skills. These were later used in our analysis to test for correlations between potential individual differences in the participants’ objective performance and their demographics, experience and self-reported spatial abilities.
2.7 Reproducibility
The authors declare that the paradigm and all of the data and code will be made available upon publication at OSF (https://osf.io/j23uz/overview?view_only=b4c9e7b19c4e4ae0a42263ef4618a54f) to enable easy reproducibility.
3 Results
3.1 Subjective reactions
All participants successfully completed the experiment with no adverse effects or dropouts. The participants did not notice the manipulation and could not tell trials apart by condition; no participant reported noticing any unusual changes to the environment during the trials. Participants reported little to no nausea (on a scale of 1 - no nausea, to 5 - strong nausea sensation), with an average score of 1.35 ± 0.65 (mean ± SD, p = 0.02), and little to no disorientation (on a scale of 1 - no disorientation, to 5 - strong disorientation), with an average score of 1.8 ± 0.85 (mean ± SD, p = 0.0009).
A few participants volunteered information about their navigation strategy during the trials. Of these, several reported counting their steps in an attempt to quantify their path, and one reported trying to fixate on a specific pixel on the ground or a distinct shape in the hills of the environment’s background to guide their way.
3.2 Objective performance
We analyzed the log files from the experiment and extracted the participants’ paths (Figures 2, 3) and error distances in each sensory reference frame.
Figure 2. This figure illustrates the path (blue line) in the Redirected condition (A) and the control condition (B) in one of the trials of the first block. The green circle is the original starting point and the target endpoint of the idiothetic reference frame, the red x is the true final position, and the yellow triangle is the “moved” starting point - i.e., the target endpoint in the environmental coordinates. The starting position for the visual reference frame “moves” with the rotations of the environment, and its path is marked in a yellow dashed line.
Figure 3. This figure illustrates the path (blue and yellow lines) in the idiothetic reference frame (A) and in the visual reference frame (B) in one of the trials of the first block. The green circle is the original starting point and the target endpoint, the red x is the true final position.
Our main metric of interest is the distance between each path’s endpoint and the corresponding origin point participants were aiming at within that reference frame. This, when compared to the error of the control condition (no redirection), should help us understand the contribution of each sensory channel to human path integration.
We found that the error distance of the answer in the idiothetic reference frame was significantly higher than that of the control for the lower rotational gain setting (mean ± SD; 1.24 ± 0.31 m vs. 1.06 ± 0.23 m, respectively, Wilcoxon signed-rank, p < 0.001, Figure 4). These results replicated at the increased gain as well (mean ± SD; 1.46 ± 0.43 m vs. 1.10 ± 0.36 m, p < 0.001, Figure 4). Looking at the answers in the visual reference frame reveals that the mean distance error is slightly higher than that of the control in both settings, but no significant difference was observed between these groups (mean ± SD; 1.11 ± 0.34 m, 1.15 ± 0.62 m, respectively, both p’s > 0.05), indicating that at the group level participants were indeed answering according to the visual reference frame.
Figure 4. This figure demonstrates the average distances between the endpoint (correct answer) and the chosen end positions of the trials in the vision reference frame and the control (A,C), and the idiothetic reference frame and the control (B,D), in the first block (g = 104%; (A,B)) and the second block (g = 108%; (C,D)). The error distances in the vision paths are slightly higher than in the control conditions, and the error distances in the idiothetic paths are significantly greater than in the control conditions (n.s. denotes not significant, ** denotes p < 0.01, and *** denotes p < 0.001, Wilcoxon signed-rank). Lines connect each participant’s scores between conditions.
These results correspond to effect sizes (Cohen’s d) of 0.65 for the low gain and 0.9 for the higher gain, both sufficiently powered for our sample size, which would require a minimum of n = 19.
To assess the contribution of each sensory channel to the navigation task, we used a standard Maximum Likelihood Estimation (MLE):
Let $p_t$ denote the participant’s chosen endpoint on trial $t$, and let $v_t$ and $i_t$ denote the origin positions in the visual and idiothetic reference frames, respectively. The response is modeled as a weighted combination of the two cues,

$$\hat{p}_t(w) = \frac{1+w}{2}\,v_t + \frac{1-w}{2}\,i_t,$$

where $w = 1$ corresponds to exclusive reliance on vision and $w = -1$ to exclusive reliance on the idiothetic cues, so that the visual contribution is $(1+w)/2$.

Under the assumptions of trial independence and a Gaussian execution noise with isotropic variance $\sigma^2$, the likelihood of a single trial is

$$L_t(w) = \frac{1}{2\pi\sigma^2}\,\exp\!\left(-\frac{\lVert p_t - \hat{p}_t(w)\rVert^2}{2\sigma^2}\right),$$

and the total log-likelihood over $T$ trials is:

$$\log L(w) = -T\log\!\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{t=1}^{T}\lVert p_t - \hat{p}_t(w)\rVert^2.$$
We can now minimize the negative log-likelihood and estimate the cue weights for each participant at both gain levels (Figure 5).
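Since, for fixed σ, maximizing this log-likelihood is equivalent to minimizing the summed squared endpoint error, the one-parameter fit reduces to a simple search, as in the sketch below. This is a minimal illustration under the parametrization above; the names and the search interval for the unbounded fit are assumptions, not the authors’ exact code.

```csharp
using UnityEngine;

// Sketch of the per-participant cue-weight fit under the model above.
public static class CueWeightFitter
{
    // responses[t]  : chosen endpoint on trial t (2D ground plane)
    // visual[t]     : origin position in the visual reference frame
    // idiothetic[t] : origin position in the idiothetic reference frame
    public static float FitWeight(Vector2[] responses, Vector2[] visual,
                                  Vector2[] idiothetic, bool bounded)
    {
        // Bounded fit constrains w to [-1, 1]; the unbounded fit simply
        // searches a wider interval (its width here is an assumption).
        float lo = bounded ? -1f : -5f;
        float hi = bounded ?  1f :  5f;
        float bestW = lo, bestSse = float.MaxValue;

        for (float w = lo; w <= hi; w += 0.001f)
        {
            float sse = 0f;
            for (int t = 0; t < responses.Length; t++)
            {
                // Predicted endpoint: (1+w)/2 * visual + (1-w)/2 * idiothetic.
                Vector2 predicted = 0.5f * (1f + w) * visual[t]
                                  + 0.5f * (1f - w) * idiothetic[t];
                sse += (responses[t] - predicted).sqrMagnitude;
            }
            if (sse < bestSse) { bestSse = sse; bestW = w; }
        }
        return bestW; // visual contribution = (1 + bestW) / 2
    }
}
```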
Figure 5. This figure shows the average MLE estimated visual weights for each participant in both gain values, sorted by lower gain weights, with unbounded weights (A), and weights bounded to [-1,1] (B).
From these weights we then estimated the overall contribution of each sensory channel by calculating the mean visual weight (low gain): 0.91 ± 0.39. This translates to a visual contribution of 95.5% ± 78%. Note that these estimated weights are unbounded, and the mean is greatly affected by overshoots, which, as can be seen in Figure 5A, are far more prevalent in the visual channel. When estimating the weights with the same MLE model but forcing them to lie within [-1, 1] (Figure 5B), we instead obtain a mean visual weight of 0.327 ± 0.17, which translates to a visual contribution of 66.3% ± 36%.
We then calculated the correlation between each participant’s weights at the two gain levels to see whether participants were stable across gain levels, and found that they indeed were (R = 0.664, p < 0.001, Figure 6).
Figure 6. This figure shows the correlation between the low gain and higher gain MLE cue weights (Pearson’s R = 0.664, p < 0.001) with 95% confidence interval and a fit line. Each blue dot represents a single participant.
However, while both versions of this model show dominance of the visual over the idiothetic reference frame, matching behavior, given the large variance we turned to look at individual participants. And indeed, looking at the individual participants reveals a very different picture. Nearly a third of the participants showed the opposite pattern! Specifically, 13/23 participants relied primarily on their visual sense consistently across both gain levels, 7/23 relied primarily on their idiothetic senses, and three participants presented different trends across the two gain levels.
We then calculated the correlation between these weights and the questionnaire responses to see if we could better characterize the participants with these biases. However, no significant correlation was found between the weights and subjective spatial skills as assessed by the Santa Barbara Sense Of Direction (SBSOD) score (Hegarty et al., 2002), or any of the demographic questions (age, gender, experience with VR) (all p’s > 0.05).
As there seemed to be a consistent pattern in which higher gains yielded a lower weight for the same reference frame, in contradiction to our expectation, we tested whether this pattern was indeed significant. However, testing the lower and higher gain weights against each other pairwise by participant revealed that these differences were not significant (paired t-test, p = 0.09).
4 Discussion and conclusion
4.1 Discussion
In this study, we aimed to test how two clashing reference frames, visual and idiothetic, are integrated for path integration during physical movement, by utilizing redirected walking with rotational gain. We found that at the group level our participants showed a significant bias towards the visual reference frame, with errors in the idiothetic reference frame being significantly larger than in the control condition, suggesting that participants were relying on the visual reference frame and not on the idiothetic one. This matches what one would expect given that RDW algorithms indeed work successfully at the behavioral level.
At the group level, the MLE model’s unbounded weights further support the hypothesis that visual input dominates over the idiothetic senses in navigation tasks, with a 95.5% contribution close to a suppression model. However, when bounding the weights to limit the effect of the overshoots, our results instead suggest a weighted balance between these two sensory channels, still dominated by vision with a 66.3% contribution, but supporting an optimal integration of sensory inputs (Oess et al., 2020) rather than sensory suppression.
Our results in the visual reference frame are in line with the findings of Hodgson et al. (2008) and De Back et al. (2025), which demonstrated that RDW does not significantly affect spatial memory compared to walking in VR with no redirection. Note that their results test the visual reference frame with and without redirection, but without a separate analysis of the idiothetic reference frame.
However, when examining the individual level, a different picture emerged. Examining the cue weights shows that nearly a third of the participants (7/23) relied primarily on their idiothetic senses, the opposite of the group-level result. This suggests that overarching claims of visual dominance, or even suppression of the idiothetic channels, do not describe a large portion of the population well at the individual level. These results are more in line with multisensory integration models than with multisensory race and suppression models.
The high correlation between the lower and higher gain weights suggests that individuals have a consistent tendency to rely more heavily on one of the sensory channels. This stability at the individual level further strengthens the finding of differences across participants, given that the effects of the idiothetic reference frame were stable across different gains.
The current common view in the redirected walking field is that visual information dominates over the idiothetic cues - indeed, the approach would not otherwise work, which it clearly does. However, it is not clear whether these idiothetic cues are simply suppressed, or whether they are integrated in a way that leaves traces or biases in our internal cognitive map. Our results show that despite this overall visual dominance, significant variability hides in individual users’ representations of space, and is reflected in their spatial memory. Users whose behavior when interacting with the environment might seem the same, and whose walking is redirected correctly, may nonetheless vary significantly in their underlying multisensory representations.
Would this happen outside of virtual reality? Such situations are not common outside VR, as a person typically does not need to choose between two incongruent sensory spatial reference frames. We would hypothesize that if such incongruent situations occur, then there too different individuals would have different (but stable) biases in the contribution of different sensory channels to their spatial representation of the environment.
Is this environment representative of a typical case in which RDW is utilized? In this research we use RDW as a tool to study human multisensory interactions, with implications for RDW, but optimizing RDW itself in simple or complex environments was not our main goal. Accordingly, our primary focus is to better understand the balance between the contributions of the visual and idiothetic sensory channels to participants’ spatial performance during navigation. As such, we chose an edge case in which this balance could be examined clearly, even though most (but not all!) environments in which RDW is deployed in practice are richer and more complex. We hypothesize that the results from this edge case do not generalize to a specific bias value in such complex environments, but rather that the general mechanism exists there as well - traces of the idiothetic reference frame will manifest in the user’s spatial representations there too, albeit possibly to a lesser extent. Thus, although the implications of our findings might be smaller in RDW applications where substantial visual landmarks are present, we believe that the potential individual biases revealed by our results are meaningful there as well. Furthermore, RDW applications will still include areas of the virtual environment that are less densely packed with visual landmarks, and to these our results apply directly.
4.2 Limitations and future work
While we did not find any clear correlations between the participants’ sensory biases and our demographic and spatial questions that would help characterize them, it should be noted that our sample size was potentially underpowered for detecting such a correlational effect, and increasing the sample size to better understand it would be an important next step. In this context, previous work has linked individuals’ multisensory perception biases to their sensitivity thresholds for redirected walking (Rothacher et al., 2018), which may show an interesting relationship to spatial memory biases as well. Additionally, Shayman et al. (2024) demonstrated that age may be a variable of interest, as older participants in their study exhibited wider variance in their results, which they suggest may be due to differences in multisensory integration. While our results are from a narrow age range, we hypothesize based on their work that the relative effect of the reference systems may manifest with age as well, and have a stronger behavioral effect in older participants. Thus, a key next step is understanding what lies behind these individual differences. Such analysis would require a much larger sample of participants, such that each demographic sub-group (age, gender, experience with VR) would be powered in and of itself.
A potential confounding variable to keep in mind is the outer environment beyond the HMD. Though we did not notice any specific sounds, ambient sounds might have affected the navigational decision-making of the participants in some of the trials. An additional potential confound is that even though efforts were made to seal the participants’ view of the physical space beyond the HMD, we cannot rule out that movement and perspiration during the experiment allowed participants to catch a glimpse of the floor. While we do not believe these effects were significant, we cannot completely rule out their occurrence.
The rotational-gain algorithm we used here is only part of the wider story of understanding the effects of redirected walking on path integration. Completing this picture would require adding further rotational gains, such as negative (i.e., decelerating) gains, in which the participant’s rotations are decreased in the virtual environment relative to the physical environment rather than increased. It would also require testing translational gains, to achieve full redirected walking, and testing the effects of each type of manipulation on participants’ performance both together and separately.
An additional limitation is the use of a few subjective experience questions for measuring nausea and disorientation symptoms, rather than a standardized cybersickness assessment tool such as the Simulator Sickness Questionnaire (Kennedy et al., 1993).
We chose to focus our work on the classic triangle completion paradigm due to the extensive literature about it. However, other path integration tasks deployed in recent years enable focus on different aspects. Specifically, in triangle completion it is difficult to disentangle different types of error sources (memory, execution), and the integration is discrete across the three legs of the triangle rather than continuous. An alternative offering these properties is the loop-closure paradigm developed by Chrastil et al. (2019), which we plan to expand our work to cover in the future.
Finally, our research focused on behavioral effects, but the underlying physiology is just as fascinating - while recording neural signals from the relevant deep brain areas (e.g., the entorhinal cortex, widely considered the core region for path integration) during naturalistic tasks involving physical motion remains a significant challenge, recent work suggests that it may be possible indirectly with EEG (Stangl et al., 2023) or directly in patients with invasive implants (Stangl et al., 2023; Maidenbaum et al., 2025).
4.3 Conclusion
Our approach of disentangling the sensory channels via redirected walking provides further insight into path integration in general, and at the individual level in particular. While we were able to quantify the contribution of each sensory channel, with or without the observed overshoots, as planned, our finding of individual affinities towards a specific sensory channel should encourage future multisensory integration researchers to examine each participant in addition to the group statistics. These subconscious effects should also be noted by VR developers and researchers when implementing personalized experiences and tailoring systems to individual users: just because a user’s path is being properly redirected does not mean that their real-world physical path leaves no traces in their internal representations of space, nor residual effects on their spatial memory.
Data availability statement
The original contributions presented in the study are publicly available. This data can be found here: https://osf.io/j23uz/overview?view_only=b4c9e7b19c4e4ae0a42263ef4618a54f
Ethics statement
The studies involving humans were approved by IRB of Ben Gurion University; approval: 2254-1. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
GL: Formal Analysis, Data curation, Visualization, Writing – original draft, Conceptualization, Writing – review and editing, Software, Methodology. SM: Validation, Resources, Project administration, Conceptualization, Supervision, Methodology, Writing – review and editing, Funding acquisition, Formal Analysis, Writing – original draft.
Funding
The author(s) declared that financial support was received for this work and/or its publication. We thank the Israeli Science Foundation (ISF) 1020/23 for our funding.
Acknowledgements
We thank the Israeli Science Foundation (ISF) 1020/23 for our funding, and our participants for participating in our experiment.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frvir.2025.1692424/full#supplementary-material
References
Anson, E. R., Ehrenburg, M. R., Wei, E. X., Bakar, D., Simonsick, E., and Agrawal, Y. (2019). Saccular function is associated with both angular and distance errors on the triangle completion test. Clin. Neurophysiol. 130 (11), 2137–2143. doi:10.1016/j.clinph.2019.08.027
Chance, S. S., Gaunet, F., Beall, A. C., and Loomis, J. M. (1998). Locomotion mode affects the updating of objects encountered during travel: the contribution of vestibular and proprioceptive inputs to path integration. Presence Teleoperators Virtual Environ. 7 (2), 168–178. doi:10.1162/105474698565659
Chrastil, E. R., Nicora, G. L., and Huang, A. (2019). Vision and proprioception make equal contributions to path integration in a novel homing task. Cognition 192 (November), 103998. doi:10.1016/j.cognition.2019.06.010
Chua, S. F. A., Liu, Y., Harris, J. M., and Otto, T. U. (2022). No selective integration required: a race model explains responses to audiovisual motion-in-depth. Cognition 227 (October), 105204. doi:10.1016/j.cognition.2022.105204
Cinelli, T., and Michael, E. (2018). The effect of galvanic vestibular stimulation on path trajectory during a path integration task. Q. J. Exp. Psychol. 72 (6). doi:10.1177/1747021818798824
De Back, T. T., Tinga, A. M., and Louwerse, M. M. (2025). Natural- and redirected walking in virtual reality: spatial performance and user experience. Multimedia Tools Appl. 84 (24), 28583–28601. doi:10.1007/s11042-024-19879-1
Ernst, M. O., and Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415 (6870), 429–433. doi:10.1038/415429a
Hegarty, M., Richardson, A. E., Montello, D. R., Lovelace, K., and Subbiah, I. (2002). Development of a self-report measure of environmental spatial ability. Intelligence 30 (5), 425–447. doi:10.1016/S0160-2896(02)00116-2
Hirt, C., Isaak, N., Holz, C., and Kunz, A. (2024). Predictive multiuser redirected walking using artificial potential fields. Front. Virtual Real. 5 (August), 1259429. doi:10.3389/frvir.2024.1259429
Hodgson, E., Bachmann, E., and Waller, D. (2008). Redirected walking to explore virtual environments: assessing the potential for spatial interference. ACM Trans. Appl. Percept. 8 (4), 22:1–22:22. doi:10.1145/2043603.2043604
Jeon, S.-B., Jung, J., Park, J., and Lee, I.-K. (2024). F-RDW: redirected walking with forecasting future position. IEEE Trans. Vis. Comput. Graph. 31, 1–15. doi:10.1109/TVCG.2024.3376080
Kennedy, R. S., Lane, N. E., Berbaum, K. S., and Lilienthal, M. G. (1993). Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 3 (3), 203–220. doi:10.1207/s15327108ijap0303_3
Lee, J., Han, S. H., and Choi, S. (2023). Sensory cue integration of visual and vestibular stimuli: a case study for 4D rides. Virtual Real. 27 (3), 1671–1683. doi:10.1007/s10055-023-00762-7
Li, Y.-J., Steinicke, F., and Wang, M. (2022). A comprehensive review of redirected walking techniques: taxonomy, methods, and future directions. J. Comput. Sci. Technol. 37 (3), 561–583. doi:10.1007/s11390-022-2266-7
Loomis, J. M., Klatzky, R. L., Golledge, R. G., Cicinelli, J. G., Pellegrino, J. W., and Fry, P. A. (1993). Nonvisual navigation by blind and sighted: assessment of path integration ability. J. Exp. Psychol. Gen. 122 (1), 73–91. doi:10.1037/0096-3445.122.1.73
Maidenbaum, S., Kremen, V., Sladky, V., Miller, K., Gompel, J. V., Worrell, G. A., et al. (2025). Improved spatial memory for physical versus virtual navigation. J. Neural Eng. 22 (4), 046014. doi:10.1088/1741-2552/ade6aa
Nescher, T., Huang, Y.-Y., and Kunz, A. (2014). “Planning redirection techniques for optimal free walking experience using model predictive control,” in 2014 IEEE Symposium on 3D User Interfaces (3DUI), 111–118. doi:10.1109/3DUI.2014.6798851
Oess, T., Löhr, M. P. R., Schmid, D., Ernst, M. O., and Neumann, H. (2020). From near-optimal bayesian integration to neuromorphic hardware: a neural network model of multisensory integration. Front. Neurorobotics 14 (May), 29. doi:10.3389/fnbot.2020.00029
Ogawa, K., Fujita, K., Sakamoto, S., Takashima, K., and Kitamura, Y. (2024). Exploring visual-auditory redirected walking using auditory cues in reality. IEEE Trans. Vis. Comput. Graph. 30 (8), 5782–5794. doi:10.1109/TVCG.2023.3309267
Ramy, K., and Herdman, C. M. (2023). Caloric vestibular stimulation induces vestibular circular vection even with a conflicting visual display presented in a virtual reality headset. i-Perception 14 (2), 20416695231168093. doi:10.1177/20416695231168093
Rietzler, M., Deubzer, M., Dreja, T., and Rukzio, E. (2020). Telewalk: towards free and endless walking in room-scale virtual reality. Proc. 2020 CHI Conf. Hum. Factors Comput. Syst. (21), 1–9. doi:10.1145/3313831.3376821
Rothacher, Y., Nguyen, A., Lenggenhager, B., Kunz, A., and Brugger, P. (2018). Visual capture of gait during redirected walking. Sci. Rep. 8 (1), 17974. doi:10.1038/s41598-018-36035-6
Shayman, C. S., McCracken, M. K., Finney, H. C., Katsanevas, A. M., Fino, P. C., Stefanucci, J. K., et al. (2024). Effects of older age on visual and self-motion sensory cue integration in navigation. Exp. Brain Res. 242 (6), 1277–1290. doi:10.1007/s00221-024-06818-7
Stangl, M., Maoz, S. L., and Suthana, N. (2023). Mobile cognition: imaging the human brain in the ‘real world’. Nat. Rev. Neurosci. 24 (6), 347–362. doi:10.1038/s41583-023-00692-y
Steinicke, F., Bruder, G., Jerald, J., Frenz, H., and Lappe, M. (2010). Estimation of detection thresholds for redirected walking techniques. IEEE Trans. Vis. Comput. Graph. 16 (1), 17–27. doi:10.1109/TVCG.2009.62
Strauss, R. R., Ramanujan, R., Becker, A., and Peck, T. C. (2020). A steering algorithm for redirected walking using reinforcement learning. IEEE Trans. Vis. Comput. Graph. 26 (5), 1955–1963. doi:10.1109/TVCG.2020.2973060
Suma, E. A., Krum, D. M., Finkelstein, S., and Bolas, M. (2011). “Effects of redirection on spatial orientation in real and virtual environments,” in 2011 IEEE Symposium on 3D User Interfaces (3DUI), 35–38. doi:10.1109/3DUI.2011.5759214
Unity Technologies (2022). Unity, version 2022.3.4f1. Available online at: https://unity.com/.
Weller, R., Benjamin, B., and Gabriel, Z. (2022). Redirected walking in virtual reality with auditory step feedback. Vis. Comput. 38 (9), 3475–3486. doi:10.1007/s00371-022-02565-4
Keywords: human behavior, multisensory integration, multisensory clashes, path integration, redirected walking, virtual reality
Citation: Lavy G and Maidenbaum S (2026) The balance between clashing visual and idiothetic cues during path integration with redirected walking. Front. Virtual Real. 6:1692424. doi: 10.3389/frvir.2025.1692424
Received: 25 August 2025; Accepted: 16 December 2025;
Published: 08 January 2026.
Edited by:
Stephen Palmisano, University of Wollongong, Australia

Reviewed by:

Corey Scott Shayman, The University of Utah, United States

Chang-Gyu Lee, Korea Institute of Industrial Technology, Republic of Korea
Copyright © 2026 Lavy and Maidenbaum. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Guy Lavy, guylav@post.bgu.ac.il; Shachar Maidenbaum, mshachar@bgu.ac.il