
REVIEW article

Front. Virtual Real., 08 January 2026

Sec. Virtual Reality and Human Behaviour

Volume 6 - 2025 | https://doi.org/10.3389/frvir.2025.1699143

Assessing and enhancing the ecological validity of human locomotion in virtual reality

  • 1Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, Jena, Germany
  • 2Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Osaka, Japan
  • 3Department of Health Promotion Science, Tokyo Metropolitan University, Tokyo, Japan

The increasing use of virtual reality (VR) technologies in behavioral research raises a critical question: to what extent do behaviors in VR mirror those in the real world? VR offers strong experimental control and reproducibility, yet its ecological validity remains debated. This review examines two fundamental human locomotion behaviors in immersive VR, overground walking and collision avoidance, and evaluates how accurately these behaviors are reproduced. Practical strategies for enhancing ecological validity that do not require specialized hardware development are also proposed. Evidence indicates that fundamental behavioral patterns are largely preserved, supporting the use of VR as a valuable tool for investigating human locomotion. However, participants consistently adopt more conservative behavioral strategies in VR, such as slower walking speeds, increased interpersonal distances, and earlier avoidance initiation, largely due to egocentric distance underestimation and head-mounted display constraints such as restricted field of view, limited resolution, and latency. To address these limitations, we suggest design approaches supported by prior evidence, including enhanced ground textures, full-body avatars, and optimal calibration procedures, all of which can enhance behavioral realism in VR. This review thus provides a comprehensive account of the ecological validity of VR locomotion and offers actionable strategies for future studies using VR.

Introduction

Over the past decade, virtual reality (VR) has gained increasing attention as a powerful methodological tool across diverse research fields, including experimental psychology (Brookes et al., 2020; Cipresso et al., 2018; de la Rosa and Breidt, 2018; Pan and Hamilton, 2018; Wrzus et al., 2024). This growing interest has been driven primarily by advances in both hardware and software, particularly in head-mounted displays (HMDs), which offer a high level of sensory immersion and enable interactive experiences that are difficult to replicate in real-world settings. By combining high immersion with precise experimental control, modern HMDs facilitate the investigation of complex human behaviors in both controlled and naturalistic environments (de la Rosa and Breidt, 2018; Thurley, 2022).

However, the scientific and clinical value of VR ultimately depends on its ecological validity (Schmuckler, 2001), which is defined as the extent to which behavior observed in VR corresponds to that in real-world settings, and whether findings can be generalized beyond the virtual context (Bohil et al., 2011; Parsons, 2015; Personeni and Savescu, 2023). Insufficient ecological validity compromises generalizability and may lead to inaccurate conclusions about real-world behavior. Recent domain-specific reviews have emphasized that, despite the strong experimental control afforded by VR, many clinical applications lack convincing evidence for successful transfer to real-world contexts (Faria et al., 2023; Mani Bharathi et al., 2024). Thus, strengthening ecological validity is not merely a theoretical concern but a necessary step toward real-world applicability.

Building on these concerns, particular attention has been paid to walking and collision avoidance tasks in VR, which are fundamental aspects of everyday locomotion. Since these behaviors involve complex motor coordination that integrates visual, vestibular, and proprioceptive inputs, the faithful reproduction of such tasks in VR is considered challenging. Our research team developed an applied walking task in VR and examined its validity, finding that task performance did not differ significantly between real and virtual conditions (Waki et al., 2025). Nonetheless, we also observed greater neck flexion in VR, presumably to compensate for the limited field of view (FOV) in HMDs, suggesting that fully replicating natural behavior remains challenging. Generally, in research domains focusing on human movement and clinical rehabilitation, even subtle movement discrepancies can yield clinically meaningful differences. For example, based on a systematic review of clinical populations, Bohannon and Glenney (2014) reported that minimal clinically important differences for changes in comfortable gait speed ranged from 0.08 to 0.26 m/s (Bohannon and Glenney, 2014). Thus, even subtle mismatches between VR and real-world movement can restrict generalization, underscoring the need to maintain ecological validity.

Although recent reviews have advanced our understanding of VR validity (Hepperle and Wölfel, 2023; Personeni and Savescu, 2023), their scope has primarily emphasized interaction fidelity, user experience, and task performance, especially in occupational/workstation contexts (Personeni and Savescu, 2023) and in cross-media validity comparisons across HMD, screen-based, and real-world settings in human behavior research (Hepperle and Wölfel, 2023). Evidence for dynamic locomotor behaviors such as overground walking and obstacle or collision avoidance, which require multimodal sensory input, remains relatively sparse and fragmented in these reviews. For example, it is unclear whether the VR-specific behaviors reported in prior studies arise from VR-specific perceptual biases or reflect early-stage adaptation to VR that can be disregarded. In addition, strategies to correct these VR-specific behaviors and align them with real-world performance have yet to be firmly established. Thus, whether VR research genuinely captures real-world behaviors and whether VR is indeed a useful tool remain central issues.

In this review, we make three contributions that will be valuable for future VR research. First, we review the fragmented literature on overground walking and collision avoidance, demonstrating that observed differences reflect both the adverse effects of VR-specific perceptual biases—especially egocentric distance underestimation—and adaptive behaviors in response to novel, idiosyncratic VR environments. Importantly, we argue that VR-specific perceptual biases are targets for remediation. Second, we identify egocentric distance underestimation in VR as a key proximal mechanism that substantially causes these conservative behaviors, clarifying a critical factor for improving VR validity. Third, we propose a practical framework for mitigating conservative behaviors and enhancing validity that does not rely on advances in hardware development. Taken together, these contributions provide a clear resolution to long-standing debates over the validity of VR locomotion and offer actionable guidance that can accelerate the use of VR as a reliable research tool.

The first part of this manuscript summarizes peer-reviewed findings, identifying differences between behaviors in VR and the real world, and outlining key contributing factors. Given that screen-based VR systems do not involve physical spatial movement, this review focuses on studies using HMDs that allow overground locomotion. In addition, although VR applications are expanding to include diverse populations such as older adults and children, current evidence for these groups is limited. Therefore, the review concentrates on studies involving healthy adult participants. To improve the validity of VR, the second part focuses on design-based solutions feasible for researchers, including task structure, stimulus presentation, and experimental protocols, rather than relying solely on advances in hardware or software. Collectively, these sections provide a conceptual and methodological framework for designing VR experiments that yield more reliable, generalizable, and behaviorally realistic outcomes.

Literature search strategy

We searched for relevant articles in major academic databases (PubMed, Web of Science, IEEE Xplore) that regularly index studies related to the aims of this review (last searched 14 October 2025). The search was conducted using the following search string: ((“virtual reality” OR “head-mounted display” OR “VR” OR “HMD”) AND (“walking” OR “locomotion” OR “gait” OR “steering” OR “collision avoidance” OR “navigation” OR “circumvent” OR “obstacle”) AND (“ecological validity” OR “realism” OR “fidelity” OR “real environment” OR “real-world” OR “physical environment”) AND (“behavior” OR “movement”)). We included studies that (1) primarily examined human movement (e.g., kinematics and movement strategies) during walking and during avoidance of avatars or static objects, and (2) compared behavior in real environments with HMD-based immersive VR to evaluate validity. We excluded studies that (a) used screen-based or desktop VR or AR, or (b) were editorials, commentaries, conference abstracts, letters, reviews, or case reports.

Ecological validity in locomotion tasks

Biomechanical validity of walking in virtual reality

Basic biomechanical features of walking have generally been found to be preserved in VR. For example, Horsak et al. (2021) compared overground walking in a room-scale VR environment that closely replicated a real-world laboratory with walking in the actual laboratory (Horsak et al., 2021). In their study, participants walked barefoot along a 7-m walkway, with their movements captured using a motion capture system and a force plate to measure kinematics and kinetics, including joint angles, joint moments, and joint power. In the VR condition, participants wore an HMD and viewed only their tracked virtual feet. Only minor kinematic and kinetic differences were observed between the two conditions. Specifically, ankle plantarflexion at push-off was reduced by approximately 3°, the sole angle by about 4°, and ankle plantarflexion power generation by approximately 0.4 W/kg in VR. However, major joint kinematics and kinetics, such as those at the hip and knee, showed no clinically meaningful changes. In addition, no significant changes were observed in neck posture, a typical postural adjustment when wearing an HMD.

Palmisano et al. (2022) also demonstrated that VR can serve as a realistic and standardized tool for walking assessment (Palmisano et al., 2022). In their study, participants walked along a 10 m path while wearing a wireless HMD (VIVE Pro, HTC Corporation, New Taipei City, Taiwan). The VR environment was a highly accurate digital replica of the physical laboratory, reproduced at a 1:1 scale. Detailed features of the virtual scene, including lighting configurations and object shadows, were carefully replicated. The results showed no significant differences in fundamental spatiotemporal gait parameters, including stride length, stride duration, walking speed, and trunk inclination. These findings indicate that there was almost no distortion in natural locomotor patterns in the immersive VR setup.

Together, these studies indicate that walking in VR closely resembles walking in the real world. The only consistent difference was a small reduction in ankle plantarflexion during push-off, with no significant changes in balance or overall walking patterns. These findings support the use of immersive VR as a valid tool for studying walking behavior. A potential limitation is the small sample sizes and short exposure durations used in these studies. The limited number of participants (n = 12–21) may reduce statistical power and increase the risk of overlooking subtle but meaningful effects. Additionally, short exposure durations raise questions about whether the observed walking patterns reflect stable behavior or merely initial responses to a novel environment. Without repeated or prolonged exposure, it remains unclear how factors such as adaptation, fatigue, or simulator sickness may influence walking in VR. Future studies should address these challenges to better establish the ecological validity of VR.

Conservative walking in VR

Despite these biomechanical similarities, many studies report that walking in VR tends to become more conservative. For instance, Janeh et al. (2017) conducted an experiment in which participants walked along a 6 m walkway in both real and virtual environments. They found that participants walked more slowly (approximately 6% reduction in velocity), took shorter steps, increased their step count, widened their base of support, and extended their double-support phase in VR (Janeh et al., 2017). These conservative tendencies were corroborated in a follow-up study that also compared younger and older adults, in which even younger adults again showed slower speed (approximately 6.5% reduction in velocity), shorter steps, and wider step width (Janeh et al., 2018). To examine whether these conservative shifts could be mitigated, they next manipulated the level of visual translation gain—defined as the ratio of the visual displacement in the virtual environment to the user’s actual physical displacement. They found that manipulating the level of visual translation gain tended to exacerbate this conservative behavior rather than reduce it. While previous studies have suggested that slightly increasing visual motion in VR may enhance the subjective naturalness of locomotion (Steinicke et al., 2010), the results of Janeh and colleagues contradict that assumption. Instead, their findings suggest that sensorimotor consistency, defined as the alignment between physical movement and visual feedback, may be a key determinant for maintaining natural and stable gait patterns in VR.
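For readers implementing comparable manipulations, the translation-gain definition above can be stated compactly. The per-frame update below is a minimal formal sketch of how such a gain is typically applied to the virtual viewpoint; it is not the exact implementation used by Janeh et al.

\[
g_T = \frac{\Delta x_{\mathrm{virtual}}}{\Delta x_{\mathrm{physical}}}, \qquad
x_{\mathrm{virtual}}(t+\Delta t) = x_{\mathrm{virtual}}(t) + g_T\,\Delta x_{\mathrm{physical}},
\]

where g_T = 1 corresponds to a one-to-one mapping (full sensorimotor consistency), g_T > 1 amplifies visual self-motion, and g_T < 1 attenuates it.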

This conservative walking strategy appears to be influenced by the visual characteristics of the walking environment. Mason et al. (2023) compared participants’ gait in both real and virtual environments along corridors of varying widths (Mason et al., 2023). They found that walking in VR tended to become more conservative, with slower steps, longer periods of double support, and increased variability across all measured parameters. Interestingly, in a wide corridor, cadence and walking speed were significantly lower in VR compared to the real environment. The authors suggested that differences in optic flow cues associated with corridor width may alter the perception of self-motion in VR, potentially leading to intentional deceleration. These results highlight the importance of carefully controlling visual information, such as environmental width and optic flow cues, when designing virtual environments for locomotion.

Further evidence of this conservative strategy was reported by Horsak et al. (2023), who focused on dynamic balance and postural control during overground walking in a room-scale VR setting designed to replicate a real-world laboratory (Horsak et al., 2023). Their results showed that participants in VR exhibited a significantly reduced anteroposterior margin of stability, which the authors interpreted as a cautious walking strategy because a slower walking speed brings the center of mass (COM) closer to the base of support. Although mediolateral stability remained unchanged, VR walking was associated with greater mediolateral trunk velocity variability, suggesting reduced lateral postural control. To compensate, participants showed a stronger coupling between COM position and the width of the next step, suggesting that they adjusted foot placement more actively to maintain balance.
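For reference, the anteroposterior margin of stability discussed here is most commonly quantified using the extrapolated center of mass; the expressions below sketch that standard formulation and are not necessarily the exact variant computed by Horsak et al. (2023).

\[
\mathrm{XCoM} = x_{\mathrm{CoM}} + \frac{v_{\mathrm{CoM}}}{\omega_0}, \qquad
\omega_0 = \sqrt{\frac{g}{\ell}}, \qquad
\mathrm{MoS} = x_{\mathrm{BoS}} - \mathrm{XCoM},
\]

where x_CoM and v_CoM are the anteroposterior position and velocity of the center of mass, ℓ is an effective pendulum length (e.g., leg length), g is gravitational acceleration, and x_BoS is the anterior boundary of the base of support. The margin thus depends jointly on the position and velocity of the center of mass relative to the base of support.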

The conservative walking pattern appears to diminish with relatively prolonged walking. Martelli et al. (2019) investigated gait changes during 6-min overground walking with a VR headset, comparing virtual-environment walking against real-environment walking (baseline) by analyzing minute-by-minute adaptation (Martelli et al., 2019). Over the 6-min exposure, many gait parameters (e.g., stride length, stride time, and stride-time variability) showed main effects of time and condition × time interactions, indicating rapid visuomotor adaptation. However, stride-width variability remained elevated, suggesting a persistent conservative strategy for mediolateral balance control. Extending this evidence, Padilla et al. (2023) analyzed 30 min of VR walking and reported that the initial 5 min exhibited a conservative gait pattern—reduced stride length, increased stride width, and prolonged double-support time—whereas by approximately 10–15 min the group-level differences in these parameters were no longer statistically significant compared with real-world walking (Padilla et al., 2023). Crucially, individual-level recovery was incomplete: only 1 of 19 participants returned to baseline across all three indices, indicating that many users do not fully abandon conservative gait strategies even with prolonged exposure.

Taken together, these studies indicate that while immersive VR largely preserves fundamental walking patterns, a conservative strategy is consistently reported. Crucially, this conservative adjustment reflects both (i) an adaptive response to the novelty and uncertainty of VR (Janeh et al., 2018; Janeh et al., 2017; Martelli et al., 2019; Padilla et al., 2023) and (ii) VR-specific biases in perception and sensory integration, such as restricted FOV and system latency, which reduce the reliability of optic flow (Horsak et al., 2023; Horsak et al., 2021; Martelli et al., 2019; Mason et al., 2023). Although conservative responses to novelty are rational and entirely natural, the effects driven by VR-specific perceptual biases should be mitigated. To improve the ecological validity of VR-based locomotion research, addressing such VR-induced conservative behavior will be essential. The following section outlines the key factors contributing to these conservative strategies.

Ecological validity of obstacle and interpersonal avoidance

Behavioral validity of obstacle avoidance in VR

Next, we focus on collision avoidance behavior involving obstacles and virtual humans (avatars) in VR, examining the extent to which such behavior replicates real-world patterns and the factors that influence it. Generally, obstacle avoidance in VR tends to preserve fundamental movement patterns, supporting its use as a valuable tool for investigating collision avoidance behavior in a safe environment. However, most studies report that participants adopt more conservative strategies in VR, such as maintaining greater clearance distances and walking at slower speeds.

For example, Fink et al. (2007) examined whether the steering dynamics model proposed by Fajen and Warren (2003), which describes how individuals dynamically adjust their heading to avoid obstacles and reach goals (Fajen and Warren, 2003), remains valid in VR (Fink et al., 2007). In their study, participants walked from a starting point toward a stationary goal while avoiding a single stationary obstacle placed along the path. The same object layouts were used in both real and virtual environments. The overall shape and curvature of walking trajectories were similar across both environments, and locomotor paths in both settings were accurately described by the same set of parameters within the steering dynamics model. These results indicate that the underlying control strategies were consistent across environments. Nevertheless, participants in VR walked more slowly and maintained greater clearance from obstacles.
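For readers unfamiliar with the model, the steering dynamics framework tested by Fink et al. treats heading as a damped angular variable attracted toward the goal and repelled by obstacles. The single-obstacle form below paraphrases the commonly presented version of the model; consult Fajen and Warren (2003) for the exact parameterization and the multi-obstacle extension.

\[
\ddot{\phi} = -b\,\dot{\phi}
\;-\; k_g\,(\phi - \psi_g)\bigl(e^{-c_1 d_g} + c_2\bigr)
\;+\; k_o\,(\phi - \psi_o)\,e^{-c_3\lvert\phi - \psi_o\rvert}\,e^{-c_4 d_o},
\]

where φ is the walker's heading, ψ_g and ψ_o are the directions of the goal and the obstacle, d_g and d_o their distances, b is a damping coefficient, k_g and k_o are attraction and repulsion gains, and c_1 through c_4 are decay constants. Fitting a single parameter set that describes trajectories in both environments is what supported the conclusion that the underlying control strategy is preserved in VR.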

Recent improvements in HMDs have the potential to reduce conservative behavioral tendencies. Agethen et al. (2018) compared obstacle avoidance behaviors in real and virtual environments that closely replicated a real-world physical setting (Agethen et al., 2018). Participants maintained slightly greater clearance in VR than in the real world, but these differences were not statistically significant, suggesting that excessively conservative avoidance behavior was not evident. Compared with the findings of Fink et al. (2007), the authors suggested that improvements in HMD technology, such as a wider FOV and reduced latency, may have contributed to improved ecological validity in VR. This pattern supports the view that continued advances in VR hardware can enhance the ecological validity of VR-based locomotion research.

Taken together, the evidence indicates that key features, such as path curvature and sequential segmental rotations, are preserved in VR, closely mirroring real-world obstacle avoidance patterns. However, conservative behaviors, including increased clearance margins and reduced approach speeds, are also observed. Such conservative strategies have been interpreted as adaptive responses to VR-specific perceptual uncertainty (e.g., weaker distance cues, field-of-view limitations, and latency) (Agethen et al., 2018; Fink et al., 2007). Recent advances in wide-field-of-view and low-latency HMDs may reduce these differences to negligible levels. These findings underscore the utility of immersive VR as a research tool, supporting its use as a reliable and ecologically valid platform for studying collision avoidance behaviors.

Behavioral validity of interpersonal avoidance in VR

Interpersonal collision avoidance involves complex, dynamic processes that extend beyond simple obstacle navigation, requiring continuous adjustments in response to others’ social and physical characteristics, such as body size, sex, and movement trajectory. Despite this complexity, previous studies have demonstrated that basic behavioral patterns are preserved in VR, supporting its validity and effectiveness as a tool for investigating interpersonal collision avoidance behaviors. Nevertheless, conservative behaviors within VR environments have been consistently reported in many previous studies.

For example, Bühler and Lamontagne (2019) asked participants to walk toward a visible target while avoiding a static pedestrian, either real or virtual, positioned at a distance of 3.0 or 3.5 m ahead, and compared behavior in the real environment and a VR setup that replicated the same spatial layout (Bühler and Lamontagne, 2019). They found that while participants in VR maintained larger clearances and walked more slowly, key behavioral features, such as the overall right-side detour bias and trial-to-trial variability in most measurement variables (lateral deviation, clearance, gait speed, and avoidance onset distance) were largely preserved across environments.

Such findings are further supported by studies using a dynamic virtual avatar from the same research group (Bühler and Lamontagne, 2018; 2022). These studies examined how participants coordinated their walking trajectories and reoriented their body segments when avoiding pedestrians approaching from different directions in both real and virtual environments. Participants in VR walked more slowly and maintained larger clearance distances, yet the temporal sequence of postural reorientation (e.g., the head, trunk, and pelvis turning in sequence) was largely preserved (Bühler and Lamontagne, 2022). In addition, the selected avoidance strategy, such as passing either in front of or behind the oncoming pedestrian, had systematic effects on locomotion. Compared to passing behind, passing in front was associated with faster walking, smaller changes in direction, and more delayed body segment reorientation. These strategy-dependent effects emerged in both the real and virtual environments.

This conservative avoidance behavior is consistent with other studies. Palmisano et al. (2022) implemented an immersive VR paradigm in which a virtual agent (VA) dynamically crossed the participant’s path approximately 3 m ahead, within a highly realistic digital replica of a laboratory environment. The presence of the VA elicited conservative adaptations, including shortened stride length, reduced walking speed, prolonged stride duration, increased mediolateral deviation, and greater trunk inclination (Palmisano et al., 2022).

In summary, previous studies have consistently shown that interpersonal collision avoidance behaviors in VR closely resemble those in real-world settings. Core features, such as the temporal structure of avoidance (observation, reaction, and adjustment), sequential head-to-pelvis reorientation, and strategy-dependent changes in speed and path deviation, are reproduced in VR. Furthermore, key measures including walking speed, trajectory deviation, and clearance often show no significant trial-to-trial differences, highlighting the value of VR for within-subject comparisons. Nonetheless, users tend to adopt more conservative behaviors in VR, including earlier avoidance initiation, slower walking speeds, and increased interpersonal distance. These conservative tendencies are considered to reflect both VR-specific perceptual biases (e.g., egocentric distance underestimation) and adaptive safety-related modulation under uncertainty (Bühler and Lamontagne, 2018; 2019; 2022; Palmisano et al., 2022). To further enhance the ecological validity of VR, it will be important to understand better the factors underlying these conservative behaviors and to develop strategies to mitigate them.

Factors contributing to conservative strategies in VR

We emphasize that not all conservative behaviors described in this review warrant mitigation. In inherently risky tasks—for example, walking at virtual height, traversing a narrow walkway, or avoiding close interpersonal contact—conservative behavior represents an appropriate and functionally adaptive response, indicating that the virtual environment affords relevant action possibilities. This view is consistent with Slater’s framework: when Place Illusion and Plausibility are sufficiently established, people tend to respond to virtual events as if they were real (Slater, 2009; Slater et al., 2022). By contrast, we consider that conservative behaviors arising from VR-specific factors—such as distance underestimation, restricted FOV, latency, and user unfamiliarity—should be treated as artifacts and prioritized for mitigation.

Among the functional factors related to the hardware device, limitations such as a restricted FOV (Bühler and Lamontagne, 2019; 2022; Fink et al., 2007; Martelli et al., 2019; Mason et al., 2023; Padilla et al., 2023; Palmisano et al., 2022), device weight (Fink et al., 2007; Janeh et al., 2017; Mason et al., 2023), and latency (Agethen et al., 2018; Horsak et al., 2023; Horsak et al., 2021) have been identified as contributors to conservative behavior. A restricted FOV reduces peripheral visual information, while latency causes mismatches between updated visual input and proprioceptive feedback from the body. Consequently, users may rely more heavily on proprioceptive and vestibular inputs, resulting in more cautious and slower walking patterns. Even relatively small delays can destabilize perceived scene stability and alter postural responses. Because delayed visual feedback does not match what the body feels and expects, the world seems less stable. In addition, the limited resolution of current HMDs can degrade the quality of visual cues in VR. For example, the ground plane is a critical source of information for distance perception. If its texture is coarse or visually impoverished, its informational value is reduced. This reduction can further impair spatial perception and promote conservative strategies.

Unfamiliarity with VR tasks, fear of potential collisions, or uncertainty about the physical environment is thought to promote more conservative behavior, such as larger safety margins and reduced walking speed (Bühler and Lamontagne, 2019; Janeh et al., 2018; Janeh et al., 2017; Martelli et al., 2019; Mason et al., 2023; Padilla et al., 2023). This tendency may be intensified under conditions involving greater cognitive load (e.g., navigating unfamiliar layouts and processing complex visual scenes) or sensory uncertainty (e.g., degraded visual resolution, limited peripheral field, and latency). In such situations, individuals may adopt conservative strategies not solely due to physical constraints but as a form of risk management aimed at preserving postural stability, avoiding collisions, and maintaining a sense of control within an environment that is perceived as unpredictable or less reliable.

Participants without prior VR experience can behave more conservatively. Padilla et al. (2023) showed that the conservative gait observed during the first 5 min of immersion is significantly reduced after 10–15 min. Martelli et al. (2019) likewise reported that 6 min of VR exposure can alter gait patterns. Moreover, beyond behavior alone, Keil et al. (2021a) demonstrated that brief distance-estimation training in VR improves distance-perception accuracy, indicating that VR experience can also induce changes in perceptual processes (Keil et al., 2021a). However, such changes can be regarded as a natural adaptation to a novel environment, and it is therefore essential to include an appropriate adaptation or familiarization phase.

Deficits in haptic input and the absence of a self-avatar are associated with more conservative behavior in VR, particularly in collision-avoidance contexts. Bühler and Lamontagne note that although immersive HMD-based VR reproduces many sensory cues, it does not fully replicate real-world sensory stimulation, and these limitations may extend to avoidance behavior (Bühler and Lamontagne, 2019). In addition, Fink et al. (2007) argued that when users cannot see a self-representation and the FOV is restricted, their capacity to visually monitor limb and torso position relative to the environment is impaired. This increased uncertainty about an obstacle’s egocentric location, in turn, leads to larger clearances in VR (Fink et al., 2007).

In many of the prior studies reviewed here, underestimation of distance has likewise been regarded as problematic (Bühler and Lamontagne, 2018; 2019; 2022; Fink et al., 2007; Janeh et al., 2018; Janeh et al., 2017; Martelli et al., 2019; Mason et al., 2023). In general, a substantial body of research has demonstrated that users typically perceive objects and surfaces in VR as being closer than they actually are in reality (Combe et al., 2023; Creem-Regehr et al., 2023; El Jamiy and Marsh, 2019; Kelly, 2023; Renner et al., 2013). For example, distant walls or obstacles in VR may appear nearer than their actual positions, leading users to anticipate potential collisions earlier than they would in the real world. In response to this perceptual uncertainty, pedestrians tend to adopt more conservative locomotor strategies, such as reducing walking speed and increasing double-support duration, to enhance postural stability.

Egocentric distance underestimation: a perceptual root of conservative strategies in immersive VR

As summarized in the previous sections, immersive VR reliably preserves the kinematic patterns of locomotion and the strategy of avoidance behavior, yet it simultaneously elicits more conservative movement patterns. Conservative responses associated with adaptation to the novelty of VR represent natural human behavior and may be regarded as acceptable. In contrast, conservative behaviors driven by inaccurate distance perception—stemming from device-related limitations (e.g., restricted FOV, reduced resolution, latency) and impoverished visual environments—should be considered artifacts and be targeted for mitigation.

In principle, human depth and distance perception can be modeled as a reliability-weighted integration of multiple sensory cues (Ernst and Bülthoff, 2004; Landy et al., 1995). At near distances (approximately 2 m), binocular disparity and vergence are the dominant sources of information, whereas at intermediate distances (from several to tens of meters), motion parallax and ground-based cues—such as ground-texture gradients, the horizon, occlusion, and cast shadows—provide the primary information (Cutting and Vishton, 1995; Ono et al., 1992; Rogers, 2009; Tresilian et al., 1999; Wu et al., 2008; Yoonessi and Baker, 2013). In HMD-based VR, such distance cues are often degraded or mutually inconsistent, which impairs their reliability. Even with modern HMDs, the underestimation of distance in VR has been reported in numerous previous studies (Combe et al., 2023; Creem-Regehr et al., 2023; El Jamiy and Marsh, 2019; Feldstein et al., 2020; Guzsvinecz et al., 2023; Kelly, 2023; Kelly et al., 2022; Korshunova-Fucci et al., 2024). In contrast, Guzsvinecz et al. (2023) reported that at very near viewing distances (e.g., 25 cm), estimates can be biased toward overestimation, indicating that perceived distance may vary systematically as a function of target range (Guzsvinecz et al., 2023). Moreover, Maruhn et al. (2019) showed that the magnitude of distance underestimation varies with the measurement method: verbal estimates and visually guided walking yielded the smallest errors, whereas other visually guided actions showed larger underestimation (Maruhn et al., 2019). Although viewing distance and measurement method can modulate the magnitude and even the direction of the bias, distance perception in VR is, overall, biased toward underestimation. For instance, Kelly (2023) reviewed 61 studies and reported that perceived egocentric distances in VR averaged approximately 73.5% of the target distances. To be specific, when distances to targets or walls are underestimated, the perceived time-to-contact (TTC) is underestimated. In turn, this promotes conservative behavior—slower speeds, larger lateral clearances, and earlier initiation of avoidance. This account is consistent with Gibson’s ecological theory of perception (Gibson, 1979), in which actions are perceived as affordances defined by the relation between the body and the environment. Accordingly, reducing VR-specific perceptual biases—particularly distance underestimation—has the potential to attenuate these conservative movement tendencies.
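The reliability-weighted integration referred to above is usually formalized as maximum-likelihood cue combination; the following is the standard textbook statement of that model (Ernst and Bülthoff, 2004; Landy et al., 1995) rather than a VR-specific result.

\[
\hat{d} = \sum_i w_i\,\hat{d}_i, \qquad
w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},
\]

where each cue i contributes a distance estimate with variance σ_i². Degrading a cue in an HMD (e.g., a coarse ground texture or a restricted FOV) inflates its σ_i², shifting weight toward the remaining cues and, when those cues are biased, toward compressed estimates. As an illustrative simplification, if perceived distance is compressed to roughly 73.5% of its true value (Kelly, 2023) while approach speed is perceived veridically, the nominal time-to-contact d/v is compressed by about the same factor; in practice perceived speed may itself be biased, so this should be read as an approximation.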

Hardware advances

Continued advances in hardware, such as wider FOV and reduced latency, can help mitigate conservative behavior. Indeed, although distance underestimation has not been eliminated even with modern headsets, studies report that a wider FOV and higher resolution mitigate this bias (Kelly, 2023). For example, Masnadi et al. (2022) used the Pimax 5K Plus—a headset with high-quality visuals (maximum FOV 165°, 2,560 × 1,440 per eye at 120 Hz; 470 g)—and found that although egocentric distances were still underestimated at the maximal FOV setting, the magnitude of underestimation was significantly reduced (Masnadi et al., 2022). Moreover, immersive HMD-based VR systems that allow users to move untethered or within room-scale tracked spaces have been used to increase freedom of movement and elicit more ecologically valid locomotor and avoidance behaviors (Sporrer et al., 2023; Waller et al., 2007). However, even in recent studies using contemporary headsets—such as the HTC VIVE Pro—participants consistently exhibit conservative behavior in VR environments (Horsak et al., 2023; Horsak et al., 2021; Mason et al., 2023; Padilla et al., 2023). That is, even when recent advances in hardware are taken into account, behavior changes driven by VR-specific perceptual biases appear to be robust. Although further improvements in device performance (e.g., true 4K resolution per eye and near-zero latency) are anticipated, relying solely on technological improvements is neither sufficient nor immediately feasible, particularly since researchers typically have little control over the specifications of commercial HMD systems.

Practical frameworks to enhance ecological validity

Accordingly, optimizing experimental design and visual presentation represents the most practical and immediate strategy for enhancing ecological validity. To translate these insights into concrete design guidance, we propose three evidence-based frameworks to make behavior in VR more comparable to that in a real environment: (1) incorporating rich depth cues, (2) utilizing VR self-avatars, and (3) conducting optimal calibration procedures. These approaches are expected to be particularly effective in mitigating systematic distance underestimation, one of the most prominent VR-specific perceptual biases. In the following sections, we summarize supporting evidence, specify design recommendations and boundary conditions, and provide actionable guidelines for building VR environments that elicit more naturalistic and generalizable locomotor behavior.

Designing rich visual cues

To address such egocentric distance underestimation, many studies have used photorealistic VR, but results are inconsistent, and the underestimation often remains (Creem-Regehr et al., 2023). Accurately replicating the visual appearance of real-world settings alone is insufficient to ensure precise distance perception. Instead, the presence and perceptual clarity of depth cues appear to play a more fundamental role in supporting accurate distance perception in VR.

Rich ground-plane texture gradients are a reliable static visual cue for distance perception in both real and virtual environments (Dussutour et al., 2004; Zhang et al., 2021). For example, applying ground-plane textures such as tiled surfaces or grass patterns that exhibit scale compression with increasing distance can enhance the perception of linear perspective, thereby supporting more accurate distance judgments. This type of visual arrangement conveys a strong impression of depth, as the surface appears to recede into the distance. A common example is a tiled floor or cobblestone path, where individual elements become progressively smaller and more closely spaced with increasing distance from the observer. This approach is based on Gibson’s (1979) theoretical framework (Gibson, 1979), particularly the concepts of visual invariants and texture gradients. This perspective is consistent with recent empirical findings showing that high-resolution texture presentation on virtual ground surfaces can significantly improve the accuracy of distance perception in VR (Hornsey and Hibbard, 2021). Hornsey and Hibbard (2021) evaluated four visual environments: sparse (minimal cues), perspective (walls and ceiling introducing linear perspective), textured (additional brick patterns on walls and floor), and cluttered (with added contextual objects). Although participants generally underestimated distances under all conditions, both bias and variability decreased as more visual cues were added. The most pronounced improvement in distance perception was achieved through the inclusion of linear perspective cues, with texture gradients and environmental clutter providing additional gains. These results emphasize the importance of realistic, scale-consistent ground textures for enhancing the distance perception in VR.
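The scale compression described above follows directly from viewing geometry. The relation below is an illustrative derivation of how the angular extent of a ground-plane element shrinks with distance; it is not taken from Hornsey and Hibbard (2021).

\[
\Delta\theta = \arctan\!\left(\frac{h}{d}\right) - \arctan\!\left(\frac{h}{d+s}\right) \approx \frac{h\,s}{d^{2}} \quad (d \gg h,\, s),
\]

where h is eye height, d is the distance to the near edge of a ground element, and s is its extent in depth. Ground textures whose elements follow this compression, rather than a uniform or arbitrarily scaled pattern, are what make the gradient informative about egocentric distance.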

Cast shadows also serve as a reliable depth cue by clarifying the spatial relationship between objects and the surface. By visually anchoring objects to the ground plane, shadows reduce ambiguity about object location in three-dimensional space, thereby improving depth perception accuracy. When shadows are accurately rendered, they enhance the perception that an object is placed on a surface rather than floating. A recent VR distance-matching study by Hornsey and Hibbard (2024) investigated how the spatial configuration of shadows influences distance perception in VR (Hornsey and Hibbard, 2024). When the light source was positioned directly above the object, aligning its cast shadow with the reference surface, distance underestimation was reduced, producing the highest distance-matching accuracy. These results are consistent with Gao et al. (2020), who reported that adding a virtual object’s shadow improved distance-matching accuracy and that shadow realism (e.g., simple drop vs. more realistic forms) did not significantly affect performance, suggesting that even low-cost shadow approximations can be effective (Gao et al., 2020). Furthermore, in the VR experiment by Palmisano et al. (2022), which demonstrated close agreement in walking patterns between VR and the real world, cast shadows were also accurately reproduced. It is plausible that the high ecological validity in their study was partly due to the presence of such rich and realistic depth information. Taken together, these findings underscore the value of incorporating spatially congruent cast shadows as a practical approach to enhance perceptual fidelity and improve the ecological validity of VR.

In addition, distance perception in VR can be improved by incorporating multiple objects. Humans often estimate the distance to an object based on its size on the retina (Sato et al., 2023; Sato and Higuchi, 2025; Sousa et al., 2011); objects that appear larger are perceived as closer, and those that appear smaller are perceived as farther away. The presence of multiple objects provides additional depth cues, serving as reference points that help observers estimate distances and understand spatial relationships more accurately. In contrast, when environmental cues are sparse, individuals must rely primarily on the size of the target object on the retina, leading to less accurate distance perception. Borodaeva et al. found that nearby boundaries or landmarks significantly reduced the underestimation of distance in VR (Borodaeva et al., 2023). Notably, the highest accuracy was achieved when both walls and landmarks were present, indicating that integrating multiple spatial cues yields more robust and reliable distance estimates. One important factor is that these objects should be familiar to the observer, as familiar objects provide a cognitive reference for estimating size and distance, enabling the visual system to infer spatial relationships more accurately (Rzepka et al., 2023). Moreover, the processing of visual information—and the magnitude of perceptual error—differs between near (action space) and far distances. For example, at very long ranges (≥100 m), underestimation becomes especially pronounced: in walking-based judgments, observers tend to stop well short of the true target distance (Hecht et al., 2018). Accordingly, distance perception is context-dependent, and the visual cues to be strengthened should be tailored to the relevant target range (Cutting and Vishton, 1995). When judgments of near targets are critical—as in collision avoidance—priority should be given to enhancing and appropriately weighting binocular cues, motion parallax, and ground- and horizon-based information.
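The retinal-size heuristic mentioned at the beginning of this paragraph can be made explicit with the size-distance relation; this is a geometric identity rather than a finding of the cited studies.

\[
d = \frac{S}{2\tan(\theta/2)} \approx \frac{S}{\theta} \quad (\text{small } \theta),
\]

where S is the object's physical size and θ the visual angle it subtends. Familiar-sized reference objects are useful precisely because they supply a reliable S for this inference; for unfamiliar objects, S and d remain confounded, which is one reason sparse environments yield less accurate distance estimates.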

However, the amount and consistency of visual information may also be an important factor, rather than merely increasing visual information (Nguyen et al., 2011). Hmaiti et al. (2024) investigated the effect of visual complexity on distance perception by systematically manipulating environmental complexity across three levels: low, medium, and high (Hmaiti et al., 2024). The results indicated a tendency for distance perception to be most accurate under the high-complexity scenario, whereas the most pronounced underestimation occurred under the medium-complexity scenario, which differed significantly from the high-complexity scenario and showed a trend-level difference relative to the low-complexity scenario (medium > high, p < 0.001; medium > low, p = 0.063). In free-response comments (thematic analysis), many participants reported that the high-complexity environments felt more realistic and unambiguous, containing more familiar objects that facilitated distance judgments. In contrast, the authors interpreted the medium-complexity condition as increasing the quantity of cues without ensuring their consistency, thereby introducing noise that hindered cue integration and promoted shorter distance estimates. Taken together, these findings highlight the importance of cue consistency and perceptual clarity, rather than mere visual richness, in supporting accurate distance perception in VR.

These empirical findings collectively suggest that the underestimation of egocentric distance, which contributes to conservative behavior in VR, can be reduced through improving visual presentation in VR. In particular, incorporating geometrically and optically informative visual cues is beneficial. Practical design strategies to enhance depth perception in VR may include: a) designing the ground surface with texture patterns (e.g., tiles or grass) that progressively decrease in size and spacing with increasing distance; b) positioning primary light sources so that cast shadows align naturally with the supporting surfaces of objects, visually anchoring them within the scene; and c) incorporating familiar-sized reference objects (e.g., desks or benches) along with environmental boundaries such as walls and fences to provide consistent spatial cues and enhance the perception of scale and enclosure. Crucially, these refinements are achievable without waiting for new hardware, offering researchers an immediate, practical way to improve the ecological validity of VR, regardless of HMD specifications.

Incorporating self-avatars

A practical approach to improving ecological validity is to provide a visually self-referential body representation (e.g., a tracked self-avatar or animated limbs) that updates in real-time with participants’ movements. Multiple studies have shown that providing a self-avatar not only enhances body ownership and presence (Eubanks et al., 2021; Gonzalez-Franco et al., 2024; Jung et al., 2022; Tan et al., 2024; Waltemate et al., 2018) but also serves as a perceptual reference for correcting distance perception (Leyrer et al., 2011; Mohler et al., 2010; Renner et al., 2013; Ries et al., 2008; Yang et al., 2020). For instance, Mohler et al. (2010) investigated how a self-avatar affects distance perception in VR (Mohler et al., 2010). Participants explored a virtual room under different body-visibility conditions, i.e., as an animated self-avatar, a static avatar, or a simple marker, shown either at their position or 3 m in front of them. Afterwards, participants performed a blind-walking task, in which they briefly viewed a target position on the virtual floor and then attempted to walk to that location without visual input. This method is commonly used to assess distance perception accuracy based on the distance participants actually walk toward the remembered target. The results showed that both animated and static self-avatars led participants to walk significantly farther (closer to the true distance) than the line-marker control (animated > static > line).
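Blind-walking performance of the kind described above is typically summarized as the ratio of walked to target distance. The short Python sketch below illustrates that metric under the assumption that per-trial target and walked distances are available; variable names and example values are illustrative, not data from Mohler et al. (2010).

```python
import numpy as np

def blind_walking_accuracy(target_m, walked_m):
    """Summarize blind-walking trials as a walked/target ratio and a signed error.

    target_m: actual egocentric target distances, one value per trial (metres)
    walked_m: distances the participant actually walked before stopping (metres)
    Returns (mean ratio, mean signed error); a ratio of 1.0 is veridical,
    values below 1.0 indicate underestimation.
    """
    target = np.asarray(target_m, dtype=float)
    walked = np.asarray(walked_m, dtype=float)
    ratio = walked / target
    signed_error = walked - target
    return ratio.mean(), signed_error.mean()

# Illustrative values only: ratios around 0.7-0.8 would match the typical
# underestimation reported in the VR literature (e.g., Kelly, 2023).
mean_ratio, mean_error = blind_walking_accuracy([2.0, 3.0, 4.0], [1.6, 2.3, 3.0])
print(f"mean walked/target ratio: {mean_ratio:.2f}, mean error: {mean_error:.2f} m")
```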

Similarly, Yang et al. (2020) examined how prior interaction experience with a controllable virtual arm (touching vs. not touching targets) and the visual fidelity of the virtual limb (full arm, hand only, or abstract solid circle) affected egocentric distance judgments under matched, lengthened (+30%), and shortened (−30%) arm-length conditions in VR (Yang et al., 2020). They found that participants demonstrated greater accuracy in distance judgments when provided with a full virtual arm representation. More specifically, distance estimates were more accurate when participants had prior interaction experience and when a full virtual arm was shown, compared to less detailed cues (i.e., hand only or solid circle). However, these benefits were evident only when the virtual arm length matched or exceeded the real arm; they were not significant when the arm was shortened.

Moreover, empirical evidence supports the idea that the presence of a self-avatar can reduce conservative behaviors in VR (Pastel et al., 2020; Pastel et al., 2022). In their studies, a narrow-beam walking task was used in which the self-avatar’s visual representation was systematically manipulated across three conditions: full-body visualization, partial-body visualization (e.g., missing feet), and no avatar (no bodily representation). All other aspects of the VR setup, including motion tracking fidelity and task difficulty, were constant across conditions. Participants performed significantly worse in the no-avatar condition, as indicated by increased task completion time and a greater number of foot placements on the beam. Interestingly, there was no consistent advantage of full-body over partial-body representations, suggesting that the presence of some bodily reference, rather than a fully detailed avatar, was sufficient to enhance performance.

Similar findings were reported by Pan and Steed (2019), in which participants engaged in a puzzle-solving task while navigating a cluttered virtual office. Participants physically walked to collect pieces placed on a table and shelves, while decorative flowers on the floor served as obstacles. To retrieve puzzle pieces, participants were required to move physically through the space and actively avoid obstacles scattered throughout the room. The self-avatar representation was manipulated across three conditions: no avatar (controllers only), a self-avatar with floating, non-tracked feet, and a self-avatar with tracked, synchronized feet. Compared with the floating-feet condition, tracked feet led participants to approach obstacles more closely, with significantly more foot samples recorded within 10 cm of the obstacles (Pan and Steed, 2019).

To summarize, these findings suggest that using the self-avatar is an effective strategy for improving distance perception in VR, potentially reducing conservative behavior. Notably, performance improvements do not always require a fully detailed body; partial embodiments can be sufficient, but the most reliable improvements occur when visual fidelity and control (e.g., tracked limbs and matched proportions) are consistent with the user’s real body. Consistent with this view, Gao et al. (2024) show that, in body-centric locomotion, users’ sense of plausible virtual walking is influenced more by the point of view (in particular, a first-person perspective) and by the mapping between body movements and locomotion control (body parts, transfer functions, and their coefficients) than by the specific level of avatar appearance realism (Gao et al., 2024). Overall, incorporating self-avatars is a key approach to improving the validity of VR.

Optimal calibration processes

Although providing rich visual cues and incorporating a self-avatar can improve distance perception and reduce conservative behaviors in VR, these strategies alone may not fully engage the internal spatial scale representations users develop through real-world experiences. These internalized spatial representations, formed through the integration of visual, vestibular, and proprioceptive cues in real-world environments, are central to accurate distance perception. Therefore, establishing optimal calibration procedures that explicitly transfer real-world spatial awareness into VR is highly effective and widely recommended for improving the ecological validity of VR-based behavior.

As described below, recent empirical studies suggest that perceptual calibration in VR can be effectively achieved through a two-stage process involving both visual anchoring and motor adaptation. The first stage establishes a stable spatial reference by exposing users to a real-world environment that structurally corresponds to the virtual setting. Participants who begin their experience in the real world through direct exploration tend to show improved distance estimation in VR compared with participants who begin in VR (Feldstein et al., 2020; Ziemer et al., 2009). This improvement likely occurs because real-world exposure provides a perceptual anchor that activates internal spatial representations, enabling users to transfer a sense of real-world scale into the virtual context. Conversely, starting in VR without prior exposure to the real environment does not mitigate distance underestimation in VR and may even negatively influence subsequent distance judgments in the physical world, likely due to carryover effects from the virtual experience, although this mechanism remains uncertain.

The second stage of calibration involves motor engagement in VR. Previous studies indicate that actively walking in VR enhances perceptual accuracy. Crucially, these improvements require actual body translation rather than visual optic flow alone (Wiesing and Zimmermann, 2023). For example, Kelly et al. (2013) demonstrated that distance perception significantly improved after participants completed several walking trials toward different targets (Kelly et al., 2013). Similarly, Richardson and Waller (2007) showed that even brief exposure to goal-directed walking reduced distance underestimation in VR (Richardson and Waller, 2007). Notably, the time required for walking calibration was relatively short. In Kelly et al. (2013), participants completed approximately 18 walking trials toward targets placed up to 4 m away, resulting in improved distance estimates. Richardson and Waller (2007) also demonstrated that even a few goal-directed walking trials significantly reduced underestimation error. Importantly, these improvements did not occur with visual optic flow simulation alone, suggesting that actual body translation, involving vestibular and proprioceptive input, is critical for effective spatial calibration.
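To make the two-stage logic concrete, the sketch below encodes a hypothetical calibration session assembled from parameters reported in the cited studies (roughly 18 goal-directed walking trials to targets up to about 4 m, as in Kelly et al., 2013). Phase durations and trial counts are illustrative assumptions, not prescriptions from any single study.

```python
# A minimal, hypothetical two-stage spatial-calibration protocol for an HMD study.
# Stage 1 provides a real-world perceptual anchor; Stage 2 provides motor adaptation
# with genuine body translation (no joystick or teleportation).
CALIBRATION_PROTOCOL = [
    {
        "phase": "real_world_anchoring",
        "description": "free exploration of the physical room that the VR scene replicates",
        "duration_min": 5,  # assumption: a brief exploration period
    },
    {
        "phase": "vr_walking_calibration",
        "description": "goal-directed walking to visible floor targets inside VR",
        "n_trials": 18,  # order of magnitude reported by Kelly et al. (2013)
        "target_distances_m": [1.0, 2.0, 3.0, 4.0],  # targets up to ~4 m
        "locomotion": "natural overground walking",
    },
]

def print_protocol(protocol):
    """Print a human-readable summary of the calibration phases."""
    for step in protocol:
        extras = {k: v for k, v in step.items() if k not in ("phase", "description")}
        print(f"{step['phase']}: {step['description']} ({extras})")

print_protocol(CALIBRATION_PROTOCOL)
```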

Indeed, it has been quantitatively demonstrated that walking in VR reduces conservative behavior. Nasiri et al. (2022) examined how gait differences between a size-matched real room and its virtual replica evolved over 40 walking trials per condition in novice (<5 h of prior VR experience) and expert (>20 h) users (Nasiri et al., 2022). Across trials, differences in walking speed and step length between the VR and real-world conditions decreased for both groups, indicating short-term adaptation. Trunk posture showed an expertise-dependent pattern, with experts assuming more upright postures (i.e., reduced forward trunk flexion) and novices leaning forward more, increasing the VR–real discrepancy. These findings suggest that repeated sensorimotor engagement in VR recalibrates perception–action coupling, leading to more realistic motor behavior.

Egocentric distance can be judged accurately using indicators such as open-loop walking and verbal reports, even when continuous visual feedback is absent (Philbeck and Loomis, 1997). In VR, however, the locomotion method and response modality determine which cues are reliable and how perceived distance is translated into action. When visual optic flow is not paired with matching vestibular/proprioceptive feedback or efference copy—as in many artificial locomotion techniques such as joystick-based steering or teleportation—distance estimates and spatial updating tend to degrade, and the effectiveness of distance-estimation training depends on the locomotion technique (Cherep et al., 2020; Keil et al., 2021b). Consistently, comparative studies show that teleportation- and joystick-based locomotion produce poorer distance estimation and path integration performance than real walking or body-driven continuous methods that better preserve self-motion cues (Hussain et al., 2025; Paris et al., 2019). Such degraded contingencies can exacerbate distance underestimation, shorten perceived time-to-contact, and promote conservative avoidance. Preserving natural sensorimotor contingencies is therefore key to calibrated, ecologically valid behavior (Ruddle and Lessels, 2009).

In summary, effective spatial calibration in VR involves two key stages: visual anchoring and motor adaptation. Visual exposure to a corresponding real-world environment activates internal spatial representations, reducing distance underestimation. Subsequent walking in VR further refines perception by aligning visual input with motor output, particularly when vestibular and proprioceptive cues are available. Together, these processes not only improve distance accuracy but also promote more natural gait patterns, thereby enhancing the ecological validity of locomotor behavior in immersive VR.

Conclusions and future directions

This review concludes that immersive VR is a valuable and effective tool for studying human locomotion, offering high experimental control and the ability to simulate complex real-world scenarios. However, evidence indicates that participants in VR often exhibit more conservative patterns despite the overall preservation of fundamental kinematic features and segmental coordination. Given that underestimation of egocentric distance emerged as a key contributor to these behavioral changes, we highlight three practical, empirically supported strategies to mitigate these effects and enhance ecological validity: (a) incorporating rich visual cues (e.g., ground textures, shadows, and familiar landmarks), (b) providing a full-body self-avatar as a spatial reference, and (c) implementing calibration procedures that combine brief real-world exposure with active motor engagement in VR. These approaches are accessible and immediately implementable, requiring no major hardware upgrades. Applying them can reduce perceptual distortions, elicit more naturalistic behavior, and improve the generalizability of VR-based findings to real-world contexts.

One important limitation remains. When comparing behavior in contemporary VR with behavior in the physical world, many studies rely on mean-based null-hypothesis significance testing and interpret non-significant results as evidence of equivalence. In addition, they rarely specify what magnitude of discrepancy would be acceptable in practice. Proper equivalence testing requires a pre-specified equivalence margin and a demonstration that the confidence interval for the effect lies entirely within that margin (Mazzolari et al., 2022). An instructive example is provided by Horsak et al. (2021), who evaluated between-environment differences against the intrarater–intersession minimum detectable change (MDC), that is, the smallest true change beyond measurement error, reported by Wilken et al. (2012) as a reference threshold. Their study employed a two-step procedure: (1) test for statistical significance, and (2) classify differences as clinically or practically meaningful only if they also exceed the MDC. This approach is more robust than significance testing alone, and future work should adopt similar methods, with explicit consideration of external validity, when assessing the ecological validity of VR.
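To make this concrete, the following minimal sketch illustrates a paired two one-sided tests (TOST) equivalence analysis of the kind described above. It is not taken from the cited studies: the variable names, simulated data, and the 0.10 m/s margin standing in for an MDC-based threshold are illustrative assumptions only.

```python
# Hedged sketch (not from the cited studies): a paired two one-sided tests (TOST)
# equivalence analysis for a gait parameter measured in matched real-world and
# VR trials, with a pre-specified margin standing in for the MDC.
import numpy as np
from scipy import stats


def tost_paired(x, y, margin, alpha=0.05):
    """Declare equivalence only if both one-sided tests reject, i.e. if the
    (1 - 2*alpha) confidence interval of the mean paired difference lies
    entirely within [-margin, +margin]."""
    diff = np.asarray(x) - np.asarray(y)
    n = diff.size
    mean = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(n)
    # H0a: mean <= -margin  vs  H1a: mean > -margin
    p_lower = 1.0 - stats.t.cdf((mean + margin) / se, df=n - 1)
    # H0b: mean >= +margin  vs  H1b: mean < +margin
    p_upper = stats.t.cdf((mean - margin) / se, df=n - 1)
    ci = stats.t.interval(1 - 2 * alpha, df=n - 1, loc=mean, scale=se)
    return {"equivalent": max(p_lower, p_upper) < alpha,
            "p_tost": (p_lower, p_upper),
            "ci": ci}


# Illustrative data: walking speed (m/s) for 20 participants; the 0.10 m/s
# margin is a hypothetical MDC-based threshold, not a published value.
rng = np.random.default_rng(1)
real_speed = rng.normal(1.30, 0.10, size=20)
vr_speed = real_speed - rng.normal(0.02, 0.04, size=20)
print(tost_paired(vr_speed, real_speed, margin=0.10))
```

In practice, the margin would be fixed in advance from a published MDC or minimal clinically important difference for the parameter of interest, not chosen after inspecting the data.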

In recent years, the increasing availability of devices such as the Quest Pro/3, Apple Vision Pro, and Varjo XR-3 has accelerated the adoption of mixed reality (MR), particularly video see-through (VST) HMDs, for research. Because VST-MR can overlay spatially registered virtual content onto live views of the physical environment, users can exploit physical references such as walls and handrails both visually and haptically, enabling natural interactions with real objects and tools. Integrating real and virtual elements within the same image pipeline can also help maintain situational awareness. However, studies report that VST can still produce systematic egocentric distance biases, most notably underestimation (Pfeil et al., 2021; Vaziri et al., 2021). In addition, characteristics of the VST pipeline, such as latency and frame processing, have been shown in recent conference work to influence spatiotemporal gait parameters during overground locomotion (e.g., walking speed, stride length, and toe clearance) (Petrovski et al., 2023), providing initial evidence that camera-mediated visual presentation can meaningfully affect perception–action coupling. Nevertheless, compared with HMD-based VR, the literature on VST-MR remains limited, underscoring the need for further systematic investigation.

Author contributions

KS: Conceptualization, Methodology, Investigation, Writing – review and editing, Writing – original draft. YS: Validation, Writing – review and editing, Investigation, Methodology. TH: Project administration, Methodology, Supervision, Writing – review and editing, Conceptualization.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Agethen, P., Sekar, V. S., Gaisbauer, F., Pfeiffer, T., Otto, M., and Rukzio, E. (2018). Behavior analysis of human locomotion in the real world and virtual reality for the manufacturing industry. Acm Trans. Appl. Percept. 15 (3), 1–19. doi:10.1145/3230648

Bohannon, R. W., and Glenney, S. S. (2014). Minimal clinically important difference for change in comfortable gait speed of adults with pathology: a systematic review. J. Eval. Clin. Pract. 20 (4), 295–300. doi:10.1111/jep.12158

Bohil, C. J., Alicea, B., and Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12 (12), 752–762. doi:10.1038/nrn3122

Borodaeva, Z., Winkler, S., Brade, J., Klimant, P., and Jahn, G. (2023). Spatial updating in virtual reality for reproducing object locations in vista space-Boundaries, landmarks, and idiothetic cues. Front. Psychol. 14, 1144861. doi:10.3389/fpsyg.2023.1144861

Brookes, J., Warburton, M., Alghadier, M., Mon-Williams, M., and Mushtaq, F. (2020). Studying human behavior with virtual reality: the Unity experiment framework. Behav. Res. Methods 52 (2), 455–463. doi:10.3758/s13428-019-01242-0

Bühler, M. A., and Lamontagne, A. (2018). Circumvention of pedestrians while walking in virtual and physical environments. IEEE Trans. Neural Syst. Rehabilitation Eng. 26 (9), 1813–1822. doi:10.1109/TNSRE.2018.2865907

Bühler, M. A., and Lamontagne, A. (2019). Locomotor circumvention strategies in response to static pedestrians in a virtual and physical environment. Gait Posture 68, 201–206. doi:10.1016/j.gaitpost.2018.10.004

Bühler, M. A., and Lamontagne, A. (2022). Coordinating clearance and postural reorientation when avoiding physical and virtual pedestrians. IEEE Trans. Neural Syst. Rehabilitation Eng. 30, 1612–1620. doi:10.1109/TNSRE.2022.3181817

Cherep, L. A., Lim, A. F., Kelly, J. W., Acharya, D., Velasco, A., Bustamante, E., et al. (2020). Spatial cognitive implications of teleporting through virtual environments. J. Exp. Psychol. Appl. 26 (3), 480–492. doi:10.1037/xap0000263

Cipresso, P., Giglioli, I. A. C., Raya, M. A., and Riva, G. (2018). The past, present, and future of virtual and augmented reality research: a network and cluster analysis of the literature. Front. Psychol. 9, 2086. doi:10.3389/fpsyg.2018.02086

Combe, T., Chardonnet, J. R., Merienne, F., and Ovtcharova, J. (2023). CAVE and HMD: distance perception comparative study. Virtual Real 27, 1–11. doi:10.1007/s10055-023-00787-y

Creem-Regehr, S. H., Stefanucci, J. K., and Bodenheimer, B. (2023). Perceiving distance in virtual reality: theoretical insights from contemporary technologies. Philosophical Trans. R. Soc. Lond. B Biol. Sci. 378 (1869), 20210456. doi:10.1098/rstb.2021.0456

Cutting, J. E., and Vishton, P. M. (1995). Chapter 3 - perceiving layout and knowing distances: the integration, relative potency, and contextual use of different information about depth. In W. Epstein, and S. Rogers (Eds.), Perception of space and motion (69–117). Academic Press. doi:10.1016/B978-012240530-3/50005-5

de la Rosa, S., and Breidt, M. (2018). Virtual reality: a new track in psychological research. Br. J. Psychol. 109 (3), 427–430. doi:10.1111/bjop.12302

Dussutour, A., Fourcassie, V., Helbing, D., and Deneubourg, J. L. (2004). Optimal traffic organization in ants under crowded conditions. Nature 428 (6978), 70–73. doi:10.1038/nature02345

El Jamiy, F., and Marsh, R. (2019). Survey on depth perception in head mounted displays: distance estimation in virtual reality, augmented reality, and mixed reality. IET Image Process. 13 (5), 707–712. doi:10.1049/iet-ipr.2018.5920

Ernst, M. O., and Bülthoff, H. H. (2004). Merging the senses into a robust percept. Trends Cognitive Sci. 8 (4), 162–169. doi:10.1016/j.tics.2004.02.002

Eubanks, J. C., Moore, A. G., Fishwick, P. A., and McMahan, R. P. (2021). A preliminary embodiment short questionnaire. Front. Virtual Real. 2, 647896. doi:10.3389/frvir.2021.647896

Fajen, B. R., and Warren, W. H. (2003). Behavioral dynamics of steering, obstacle avoidance, and route selection. J. Exp. Psychol. Hum. Percept. Perform. 29 (2), 343–362. doi:10.1037/0096-1523.29.2.343

Faria, A. L., Latorre, J., Silva Cameirao, M., Bermudez, I. B. S., and Llorens, R. (2023). Ecologically valid virtual reality-based technologies for assessment and rehabilitation of acquired brain injury: a systematic review. Front. Psychol. 14, 1233346. doi:10.3389/fpsyg.2023.1233346

Feldstein, I. T., Kolsch, F. M., and Konrad, R. (2020). Egocentric distance perception: a comparative study investigating differences between real and virtual environments. Perception 49 (9), 940–967. doi:10.1177/0301006620951997

Fink, P. W., Foo, P. S., and Warren, W. H. (2007). Obstacle avoidance during walking in real and virtual environments. Acm Trans. Appl. Percept. 4 (1), 2. doi:10.1145/1227134.1227136

Gao, Y., Peillard, E., Normand, J. M., Moreau, G., Liu, Y., and Wang, Y. (2020). Influence of virtual objects' shadows and lighting coherence on distance perception in optical see-through augmented reality. J. Soc. Inf. Disp. 28 (2), 117–135. doi:10.1002/jsid.832

Gao, B., Zheng, H., Zhao, J., Tu, H., Kim, H., and Duh, H. B.-L. (2024). "Evaluating plausible preference of body-centric locomotion using subjective matching in virtual reality," in 2024 IEEE conference on virtual reality and 3D user interfaces (VR).

Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.

Gonzalez-Franco, M., Steed, A., Berger, C. C., and Tajadura-Jiménez, A. (2024). The impact of first-person avatar customization on embodiment in immersive virtual reality. Front. Virtual Real. 5, 1436752. doi:10.3389/frvir.2024.1436752

Guzsvinecz, T., Perge, E., and Szucs, J. (2023). Examining the results of virtual reality-based egocentric distance estimation tests based on immersion level. Sensors (Basel) 23 (6), 3138. doi:10.3390/s23063138

Hecht, H., Ramdohr, M., and von Castell, C. (2018). Underestimation of large distances in active and passive locomotion. Exp. Brain Res. 236 (6), 1603–1609. doi:10.1007/s00221-018-5245-z

Hepperle, D., and Wölfel, M. (2023). Similarities and differences between immersive virtual reality, real world, and computer screens: a systematic scoping review in human behavior studies. Multimodal Technol. Interact. 7 (6), 56. doi:10.3390/mti7060056

Hmaiti, Y., Maslych, M., Ghasemaghaei, A., Ghamandi, R. K., and LaViola, J. J. (2024). Visual perceptual confidence: exploring discrepancies between self-reported and actual distance perception in virtual reality. IEEE Trans. Vis. Comput. Graph 30 (11), 7245–7254. doi:10.1109/TVCG.2024.3456165

Hornsey, R. L., and Hibbard, P. B. (2021). Contributions of pictorial and binocular cues to the perception of distance in virtual reality. Virtual Real. 25 (4), 1087–1103. doi:10.1007/s10055-021-00500-x

Hornsey, R. L., and Hibbard, P. B. (2024). Distance mis-estimations can be reduced with specific shadow locations. Sci. Rep. 14 (1), 9566. doi:10.1038/s41598-024-58786-1

Horsak, B., Simonlehner, M., Schoffer, L., Dumphart, B., Jalaeefar, A., and Husinsky, M. (2021). Overground walking in a fully immersive virtual reality: a comprehensive study on the effects on full-body walking biomechanics. Front. Bioeng. Biotechnol. 9, 780314. doi:10.3389/fbioe.2021.780314

Horsak, B., Simonlehner, M., Dumphart, B., and Siragy, T. (2023). Overground walking while using a virtual reality head mounted display increases variability in trunk kinematics and reduces dynamic balance in young adults. Virtual Real. 27 (4), 3021–3032. doi:10.1007/s10055-023-00851-7

Hussain, R., Chessa, M., and Solari, F. (2025). "A comparative study on locomotion methods and distance perception in immersive virtual reality," in 2025 IEEE conference on virtual reality and 3D user interfaces abstracts and workshops (VRW). doi:10.1109/VRW66409.2025.00275

Janeh, O., Langbehn, E., Steinicke, F., Bruder, G., Gulberti, A., and Poetter-Nerger, M. (2017). Walking in virtual reality: effects of manipulated visual self-motion on walking biomechanics. Acm Trans. Appl. Percept. 14 (2), 1–15. doi:10.1145/3022731

Janeh, O., Bruder, G., Steinicke, F., Gulberti, A., and Poetter-Nerger, M. (2018). Analyses of gait parameters of younger and older adults during (Non-)Isometric virtual walking. IEEE Trans. Vis. Comput. Graph 24 (10), 2663–2674. doi:10.1109/TVCG.2017.2771520

Jung, M., Sim, S., Kim, J., and Kim, K. (2022). Impact of personalized avatars and motion synchrony on embodiment and users’ subjective experience: empirical study. JMIR Serious Games 10 (4), e40119. doi:10.2196/40119

Keil, J., Edler, D., O’Meara, D., Korte, A., and Dickmann, F. (2021a). Effects of virtual reality locomotion techniques on distance estimations. ISPRS Int. J. Geo-Information 10 (3), 150. doi:10.3390/ijgi10030150

Keil, J., Korte, A., Edler, D., O‘Meara, D., and Dickmann, F. (2021b). Changes of locomotion speed affect distance estimations in virtual reality. Proc. ICA 4, 1–5. doi:10.5194/ica-proc-4-57-2021

Kelly, J. W. (2023). Distance perception in virtual reality: a meta-analysis of the effect of head-mounted display characteristics. IEEE Trans. Vis. Comput. Graph 29 (12), 4978–4989. doi:10.1109/TVCG.2022.3196606

Kelly, J. W., Donaldson, L. S., Sjolund, L. A., and Freiberg, J. B. (2013). More than just perception-action recalibration: walking through a virtual environment causes rescaling of perceived space. Atten. Percept. Psychophys. 75 (7), 1473–1485. doi:10.3758/s13414-013-0503-4

Kelly, J. W., Doty, T. A., Ambourn, M., and Cherep, L. A. (2022). Distance perception in the oculus quest and oculus quest 2. Front. Virtual Real. 3, 850471. doi:10.3389/frvir.2022.850471

Korshunova-Fucci, V., van Himbergen, F. F., Fan, H. M., Kohlrausch, A., and Cuijpers, R. H. (2024). Quantifying egocentric distance perception in virtual reality environment. Int. J. Human-Computer Interact. 40 (18), 5431–5442. doi:10.1080/10447318.2023.2234117

Landy, M. S., Maloney, L. T., Johnston, E. B., and Young, M. (1995). Measurement and modeling of depth cue combination: in defense of weak fusion. Vis. Res., 35(3), 389–412. doi:10.1016/0042-6989(94)00176-M

Leyrer, M., Linkenauger, S. A., Bülthoff, H. H., Kloos, U., and Mohler, B. (2011). “The influence of eye height and avatars on egocentric distance estimates in immersive virtual environments,” in Proceedings of the ACM SIGGRAPH symposium on applied perception in graphics and visualization. Toulouse, France. doi:10.1145/2077451.2077464

Mani Bharathi, V., Manimegalai, P., George, S. T., Pamela, D., Mohammed, M. A., Abdulkareem, K. H., et al. (2024). A systematic review of techniques and clinical evidence to adopt virtual reality in post-stroke upper limb rehabilitation. Virtual Real. 28 (4), 172. doi:10.1007/s10055-024-01065-1

Martelli, D., Xia, B., Prado, A., and Agrawal, S. K. (2019). Gait adaptations during overground walking and multidirectional oscillations of the visual field in a virtual reality headset. Gait Posture 67, 251–256. doi:10.1016/j.gaitpost.2018.10.029

Maruhn, P., Schneider, S., and Bengler, K. (2019). Measuring egocentric distance perception in virtual reality: influence of methodologies, locomotion and translation gains. PloS One 14 (10), e0224651. doi:10.1371/journal.pone.0224651

Masnadi, S., Pfeil, K., Sera-Josef, J.-V. T., and LaViola, J. (2022). "Effects of field of view on egocentric distance perception in virtual reality," in CHI conference on human factors in computing systems.

Mason, A. H., Padilla, A. S., Peer, A., Toepfer, M., Ponto, K., and Pickett, K. A. (2023). The role of the visual environment on characteristics of over-ground locomotion in natural and virtual environments. Int. J. Human-Computer Stud. 169, 102929. doi:10.1016/j.ijhcs.2022.102929

Mazzolari, R., Porcelli, S., Bishop, D. J., and Lakens, D. (2022). Myths and methodologies: the use of equivalence and non-inferiority tests for interventional studies in exercise physiology and sport science. Exp. Physiol. 107 (3), 201–212. doi:10.1113/EP090171

Mohler, B. J., Creem-Regehr, S. H., Thompson, W. B., and Bülthoff, H. H. (2010). The effect of viewing a self-avatar on distance judgments in an HMD-based virtual environment. Presence-Teleoperators Virtual Environ. 19 (3), 230–242. doi:10.1162/pres.19.3.230

Nasiri, M., Anaraky, R. G., Babu, S. V., and Robb, A. (2022). "Gait differences in the real world and virtual reality: the effect of prior virtual reality experience," in 2022 IEEE international symposium on mixed and augmented reality (ISMAR).

Nguyen, T. D., Cremer, J. F., Kearney, J. K., and Plumert, J. M. (2011). "Effects of scene density and richness on traveled distance estimation in virtual environments," in Proceedings of the ACM SIGGRAPH symposium on applied perception in graphics and visualization. Toulouse, France. doi:10.1145/2077451.2077466

Ono, H., Shimono, K., and Shibuta, K. (1992). Occlusion as a depth cue in the wheatstone-panum limiting case. Percept. & Psychophys. 51 (1), 3–13. doi:10.3758/bf03205069

Padilla, A. S., Toepfer, M., Peer, A., Ponto, K., Pickett, K. A., and Mason, A. H. (2023). Effects of habituation on spatiotemporal gait measures in younger adults. PRESENCE Virtual Augmented Real. 32, 129–146. doi:10.1162/pres_a_00405

Palmisano, C., Kullmann, P., Hanafi, I., Verrecchia, M., Latoschik, M. E., Canessa, A., et al. (2022). A fully-immersive virtual reality setup to study gait modulation. Front. Hum. Neurosci. 16, 783452. doi:10.3389/fnhum.2022.783452

Pan, X., and Hamilton, A. F. C. (2018). Why and how to use virtual reality to study human social interaction: the challenges of exploring a new research landscape. Br. J. Psychol. 109 (3), 395–417. doi:10.1111/bjop.12290

Pan, Y., and Steed, A. (2019). How foot tracking matters: the impact of an animated self-avatar on interaction, embodiment and presence in shared virtual environments. Front. Robot. AI 6, 104. doi:10.3389/frobt.2019.00104

Paris, R., Klag, J., Rajan, P., Buck, L., McNamara, T. P., and Bodenheimer, B. (2019). "How video game locomotion methods affect navigation in virtual environments," in ACM symposium on applied perception 2019.

Parsons, T. D. (2015). Virtual reality for enhanced ecological validity and experimental control in the clinical, affective and social neurosciences. Front. Hum. Neurosci. 9, 660. doi:10.3389/fnhum.2015.00660

Pastel, S., Chen, C. H., Petri, K., and Witte, K. (2020). Effects of body visualization on performance in head-mounted display virtual reality. PloS One 15 (9), e0239226. doi:10.1371/journal.pone.0239226

Pastel, S., Petri, K., Burger, D., Marschal, H., Chen, C. H., and Witte, K. (2022). Influence of body visualization in VR during the execution of motoric tasks in different age groups. PloS One 17 (1), e0263112. doi:10.1371/journal.pone.0263112

Personeni, G., and Savescu, A. (2023). Ecological validity of virtual reality simulations in workstation health and safety assessment. Front. Virtual Real. 4, 1058790. doi:10.3389/frvir.2023.1058790

Petrovski, D., Tirosh, O., McCarthy, C., and Kameneva, T. (2023). Video see-through pipelines for virtual reality headsets and their impact on gait. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2023, 1–4. doi:10.1109/embc40787.2023.10340876

Pfeil, K., Masnadi, S., Belga, J., Sera-Josef, J.-V. T., and LaViola, J. (2021). "Distance perception with a video see-through head-mounted display," in Proceedings of the 2021 CHI conference on human factors in computing systems. Yokohama, Japan. doi:10.1145/3411764.3445223

Philbeck, J. W., and Loomis, J. M. (1997). Comparison of two indicators of perceived egocentric distance under full-cue and reduced-cue conditions. J. Exp. Psychol. Hum. Percept. Perform. 23 (1), 72–85. doi:10.1037//0096-1523.23.1.72

Renner, R. S., Velichkovsky, B. M., and Helmert, J. R. (2013). The perception of egocentric distances in virtual environments - a review. ACM Comput. Surv. 46 (2), 1–40. doi:10.1145/2543581.2543590

Richardson, A. R., and Waller, D. (2007). Interaction with an immersive virtual environment corrects users' distance estimates. Hum. Factors 49 (3), 507–517. doi:10.1518/001872007X200139

Ries, B., Interrante, V., Kaeding, M., and Anderson, L. (2008). "The effect of self-embodiment on distance perception in immersive virtual environments," in Proceedings of the 2008 ACM symposium on virtual reality software and technology. Bordeaux, France, 167–170. doi:10.1145/1450579.1450614

Rogers, B. (2009). Motion parallax as an independent cue for depth perception: a retrospective. Perception 38 (6), 907–911. doi:10.1068/pmkrog

Ruddle, R. A., and Lessels, S. (2009). The benefits of using a walking interface to navigate virtual environments. ACM Trans. Computer-Human Interact. 16 (1), 1–18. doi:10.1145/1502800.1502805

Rzepka, A. M., Hussey, K. J., Maltz, M. V., Babin, K., Wilcox, L. M., and Culham, J. C. (2023). Familiar size affects perception differently in virtual reality and the real world. Philosophical Trans. R. Soc. Lond. B Biol. Sci. 378 (1869), 20210464. doi:10.1098/rstb.2021.0464

Sato, K., and Higuchi, T. (2025). Enhancing collision prediction in older adults via perceptual training in virtual reality emphasizing object expansion. Front. Sports Act. Living 7, 1652911. doi:10.3389/fspor.2025.1652911

Sato, K., Fukuhara, K., and Higuchi, T. (2023). Age-related changes in the utilization of visual information for collision prediction: a study using an affordance-based model. Exp. Aging Res. 50 (5), 800–816. doi:10.1080/0361073X.2023.2278985

Schmuckler, M. A. (2001). What is ecological validity? A dimensional analysis. Infancy 2 (4), 419–436. doi:10.1207/S15327078IN0204_02

Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Trans. R. Soc. Lond. B Biol. Sci. 364 (1535), 3549–3557. doi:10.1098/rstb.2009.0138

Slater, M., Banakou, D., Beacco, A., Gallego, J., Macia-Varela, F., and Oliva, R. (2022). A separate reality: an update on place illusion and plausibility in virtual reality. Front. Virtual Real. 3, 914392. doi:10.3389/frvir.2022.914392

Sousa, R., Brenner, E., and Smeets, J. B. (2011). Judging an unfamiliar object's distance from its retinal image size. J. Vis. 11 (9), 10. doi:10.1167/11.9.10

Sporrer, J. K., Brookes, J., Hall, S., Zabbah, S., Serratos Hernandez, U. D., and Bach, D. R. (2023). Functional sophistication in human escape. iScience 26 (11), 108240. doi:10.1016/j.isci.2023.108240

Steinicke, F., Bruder, G., Jerald, J., Frenz, H., and Lappe, M. (2010). Estimation of detection thresholds for redirected walking techniques. IEEE Trans. Vis. Comput. Graph 16 (1), 17–27. doi:10.1109/TVCG.2009.62

Tan, G., Uchitomi, H., Isobe, R., and Miyake, Y. (2024). Sense of embodiment with synchronized avatar during walking in mixed reality. Sci. Rep. 14 (1), 21198. doi:10.1038/s41598-024-72095-7

Thurley, K. (2022). Naturalistic neuroscience and virtual reality. Front. Syst. Neurosci. 16, 896251. doi:10.3389/fnsys.2022.896251

Tresilian, J. R., Mon-Williams, M., and Kelly, B. M. (1999). Increasing confidence in vergence as a cue to distance. Proc. Biol. Sci. 266 (1414), 39–44. doi:10.1098/rspb.1999.0601

Vaziri, K., Bondy, M., Bui, A., and Interrante, V. (2021). "Egocentric distance judgments in full-cue video-see-through VR conditions are no better than distance judgments to targets in a void," in 2021 IEEE virtual reality and 3D user interfaces (VR).

Waki, R., Sato, K., Inoue, J., Yamada, M., and Higuchi, T. (2025). How far along the future path do individuals recognize the path for stepping on multiple footfall targets? A new evaluation method under virtual reality. Front. Sports Act. Living 7, 1526576. doi:10.3389/fspor.2025.1526576

Waller, D., Bachmann, E., Hodgson, E., and Beall, A. C. (2007). The HIVE: a huge immersive virtual environment for research in spatial cognition. Behav. Res. Methods 39 (4), 835–843. doi:10.3758/BF03192976

Waltemate, T., Gall, D., Roth, D., Botsch, M., and Latoschik, M. E. (2018). The impact of avatar personalization and immersion on virtual body ownership, presence, and emotional response. IEEE Trans. Vis. Comput. Graph. 24 (4), 1643–1652. doi:10.1109/TVCG.2018.2794629

Wiesing, M., and Zimmermann, E. (2023). Serial dependencies between locomotion and visual space. Sci. Rep. 13 (1), 3302. doi:10.1038/s41598-023-30265-z

Wilken, J. M., Rodriguez, K. M., Brawner, M., and Darter, B. J. (2012). Reliability and minimal detectible change values for gait kinematics and kinetics in healthy adults. Gait Posture 35 (2), 301–307. doi:10.1016/j.gaitpost.2011.09.105

Wrzus, C., Frenkel, M. O., and Schone, B. (2024). Current opportunities and challenges of immersive virtual reality for psychological research and application. Acta Psychol. 249, 104485. doi:10.1016/j.actpsy.2024.104485

Wu, J., He, Z. J., and Ooi, T. L. (2008). Perceived relative distance on the ground affected by the selection of depth information. Percept. & Psychophys. 70 (4), 707–713. doi:10.3758/pp.70.4.707

Yang, Z., Shi, J., Xiao, Y., Yuan, X., Wang, D., Li, H., et al. (2020). Influences of experience and visual cues of virtual arm on distance perception. Iperception 11 (1), 2041669519901134. doi:10.1177/2041669519901134

Yoonessi, A., and Baker, C. L., Jr. (2013). Depth perception from dynamic occlusion in motion parallax: roles of expansion-compression versus accretion-deletion. J. Vis. 13 (12), 10. doi:10.1167/13.12.10

Zhang, J., Yang, X., Jin, Z., and Li, L. (2021). Distance estimation in virtual reality is affected by both the virtual and the real-world environments. Iperception 12 (3), 20416695211023956. doi:10.1177/20416695211023956

Ziemer, C. J., Plumert, J. M., Cremer, J. F., and Kearney, J. K. (2009). Estimating distance in real and virtual environments: does order make a difference? Atten. Percept. Psychophys. 71 (5), 1095–1106. doi:10.3758/APP.71.5.1096

Keywords: virtual reality, VR, ecological validity, human locomotion, overground walking, collision avoidance, egocentric distance underestimation, head-mounted display

Citation: Sato K, Suda Y and Higuchi T (2026) Assessing and enhancing the ecological validity of human locomotion in virtual reality. Front. Virtual Real. 6:1699143. doi: 10.3389/frvir.2025.1699143

Received: 04 September 2025; Accepted: 08 December 2025;
Published: 08 January 2026.

Edited by:

Weiya Chen, Huazhong University of Science and Technology, China

Reviewed by:

Yonatan Hutabarat, University of Bonn, Germany
Razeen Hussain, University of Genoa, Italy

Copyright © 2026 Sato, Suda and Higuchi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Takahiro Higuchi, higuchit@tmu.ac.jp
