<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Virtual Reality | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/virtual-reality</link>
        <description>RSS Feed for Frontiers in Virtual Reality | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Fri, 10 Apr 2026 23:22:45 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1730408</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1730408</link>
        <title><![CDATA[Generational differences in visual engagement: applying the visual interaction analysis (VIA) methodology composed of eye-tracking and virtual reality]]></title>
        <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Cristobal Rodolfo Guerra-Tamez</author><author>Pedro Daniel Rodríguez Sánchez</author>
        <description><![CDATA[Understanding visual engagement in marketing is crucial for optimizing user experience and enhancing campaign effectiveness. This study applies the Visual Interaction Analysis (VIA) methodology as a controlled VR-based protocol combining immersive virtual reality with eye-tracking to quantify visual attention allocation and examine generational and gender differences in engagement with advertising materials. Using the Cognitive3D platform, fixation-based metrics—Total Fixations (TF), Total Duration of Fixations (TDF), and Average Duration per Fixation (ADF)—were analyzed for Millennials and Generation Z participants. A total of 82 participants were recruited; inferential analyses were conducted on the valid eye-tracking subsample (N = 44; 22 Millennials, 22 Gen Z). Results revealed cohort differences in visual attention allocation for this poster stimulus under controlled VR free-viewing conditions: Generation Z exhibited higher fixation counts and longer total fixation durations than Millennials. In contrast, average fixation duration did not differ between generations, suggesting comparable moment-to-moment attentional processing. Within Generation Z, female participants showed longer average fixation durations than males. These findings demonstrate how fixation-based eye-tracking in controlled VR environments can provide actionable diagnostics of visual attention distribution, supporting pre-launch optimization of marketing and communication materials.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1757871</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1757871</link>
        <title><![CDATA[Two limbs are better than one: multi-limb tactons for precise hand navigation]]></title>
        <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Takashige Suzuki</author><author>Kizashi Nakano</author><author>Takuji Narumi</author><author>Hideaki Kuzuoka</author>
        <description><![CDATA[Virtual Reality (VR) training systems often rely on visual cues, which can compete for the user’s attention, particularly in high-skill domains like surgery or assembly. While vibrotactile guidance offers a non-visual alternative, current single-limb systems suffer from limited spatial resolution, restricting their directional precision for complex tasks. To overcome this limitation, we propose a multi-limb “Tacton (symbolic vibrotactile patterns)” strategy that distributes vibrotactile information across anatomically distinct limbs (the wrist and ankle). We conducted two experiments to validate this approach and determine the optimal reference frame for inter-limb coordination. Experiment 1 (N=12) evaluated the distribution strategy by comparing the directional precision of single-limb (wrist and forearm) versus multi-limb (wrist and ankle) configurations using a novel temporal pattern encoding for 32 unique directions. Results demonstrated that distributing cues significantly improved precision, reducing angular errors to under 22.5° in over 77% of trials compared to the single-limb condition. Experiment 2 (N=12) addressed the cognitive challenge of coordinating these distributed signals by comparing a body-based “Skeletal” frame with an environment-based “World” reference frame. The “World” frame, which maps cues to an allocentric coordinate system, yielded substantially faster reaction times and lower angular errors than the “Skeletal” frame, minimizing the cognitive load associated with mental rotation. We conclude that high-precision, non-visual hand guidance is best achieved by distributing symbolic haptic cues across separate limbs and mapping them to a stable, allocentric coordinate system. These findings provide foundational design principles for creating immersive, hands-free guidance systems that preserve the user’s visual-attentional resources.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1806316</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1806316</link>
        <title><![CDATA[Saccadic undershooting in gaze generation for virtual characters]]></title>
        <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ruoting Lian</author><author>Hironori Mitake</author><author>Shoichi Hasegawa</author>
        <description><![CDATA[Saccades are the primary eye movements used to shift gaze and play an important role in generating realistic gaze behavior for virtual characters. While many gaze generation studies focus on high-level attention or target selection, finer characteristics of human eye movements, such as saccadic undershooting, have received relatively little attention. In this paper, we propose a psychologically plausible gaze generation model that explicitly incorporates saccadic undershooting to produce more human-like gaze behavior. The model parameters were derived from eye-tracking data collected in a VR environment and integrated into a gaze generation framework for virtual characters. Quantitative evaluation using leave-one-participant-out cross-validation shows that the proposed model reproduces the undershooting patterns observed in human data and achieves lower absolute errors than a representative existing model and a random baseline. A subjective user study further indicates that participants can perceive differences between the gaze behaviors generated by the models. Although no significant differences were found in median ratings of human-likeness, roboticness, or head–eye coordination, the bimodal rating patterns suggest that subtle variations in gaze behavior may influence users’ perceptions of virtual characters.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1797341</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1797341</link>
        <title><![CDATA[Applications of extended reality within the shipbuilding industry: a systematic literature review]]></title>
        <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Joni Rajamäki</author><author>Mirva Tapola</author><author>Olli Heimo</author><author>Teijo Lehtonen</author><author>Jaakko Järvi</author>
        <description><![CDATA[Extended reality encompasses virtual reality, which places the user in a virtual world, and augmented reality, which adds virtual elements to the real world. Extended reality has been touted as a pivotal technology within Industry 4.0, but it has yet to make a significant impact in industrial applications. Shipbuilding is a longstanding and traditional branch of industry that is characterized as slow to innovate. The importance of shipbuilding is rising as regions like the Arctic are opened up, placing additional demand on shipyards. To better accommodate these demands, novel means of improving efficiency are welcome within shipbuilding. This article presents a systematic literature review analyzing research on the use of extended reality within the shipbuilding industry. The review focuses on the current extent of research, how the different sub-technologies of extended reality overlap with the different phases of shipbuilding, how the technology is evaluated, and what kind of value can be derived from current research. A total of 44 articles from nine sources are reviewed. The results indicate an overall early state of research characterized by a heavy focus on pilot studies. Clear use cases for extended reality solutions are identified, and some instances of demonstrable value for shipbuilding operations are presented. Shortcomings in the current research and potential future directions are also outlined.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1713691</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1713691</link>
        <title><![CDATA[Virtual reality for pain management during metal fixed appliances removal: a multicentric before and after study]]></title>
        <pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ines Bouhlal</author><author>Aurélie Mailloux</author>
        <description><![CDATA[Background: Numerous studies have reported that the use of virtual reality (VR) can reduce patient anxiety and pain during medical care. Although patients may experience pain when their fixed braces are removed, this topic is poorly documented. Methods: This article evaluates the impact of a VR system on patients’ pain during debonding of orthodontic metal appliances, through a multicentric observational before-after study that included 66 patients (33 before and 33 after) whose metal orthodontic brackets were removed between September 2023 and December 2024. Secondary objectives were to identify possible associations between patient age, sex, clinician, and pain. We compared demographic data with the chi-square and Student’s t-tests (sex and age), anxiety before debonding, and pain scores on a ten-point scale (overall and per dental zone). A multivariate analysis was performed for the total pain score at a constant anxiety score. Results: The median pain score and extreme values were equal or lower in the VR group for all dental zones, without significant differences. When adding the zone scores, the median total pain score was also lower: NoVR = 12 (6–21); VR = 10 (3–15). The VR device reduced the total pain score (sum of all dental zones) by an average of 6.2 points (out of 60 points), at constant anxiety (P = 0.0048).]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1743641</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1743641</link>
        <title><![CDATA[Implementing IVR for family therapy training: a prototype for first family therapy sessions]]></title>
        <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
        <category>Perspective</category>
        <author>M. Everri</author><author>A. C. Queiroz</author><author>M. Heitmayer</author><author>A. Campbell</author><author>F. Balestra</author><author>D. Hanley</author><author>L. Fruggeri</author><author>J. Jacobson</author><author>M. Messena</author><author>T. Merrick</author><author>V. O’Brien</author><author>D. S. Rait</author><author>A. Roberts</author><author>S. Thaker</author>
        <description><![CDATA[Role plays and live supervision have been core methods in family therapy education, offering trainees experiential opportunities to practice therapeutic techniques, engage in reflexivity, and develop systemic awareness. However, these traditional methods face limitations in scalability, standardization, and emotional safety. Immersive Virtual Reality (IVR), a technology capable of eliciting realistic affective and cognitive responses through a sense of presence, presents new possibilities for addressing these challenges. Drawing upon research in simulation-based learning, this article explores how IVR can enhance the acquisition of core family therapy competencies (technical skills and relational, epistemological, and context sensitivity). The paper synthesizes existing family therapy education models and methods with IVR-based training research. It highlights the unique pedagogical affordances of IVR, i.e., embodied perspective-taking, emotional safety, standardization, and repeatability, and links these to family therapy training goals. An IVR prototype developed by the authors simulates a first family therapy session, providing a proof of concept for integrating virtual simulations into therapist education. Preliminary feedback from professionals indicates that IVR can foster engagement and self-reflexivity, though challenges remain regarding content realism, cost, and trainers’ digital skills. The article concludes by identifying future directions for research and practice, emphasizing the need for interdisciplinary collaboration, empirical validation, and ethical frameworks to guide the responsible implementation of IVR in family therapy education.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1693453</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1693453</link>
        <title><![CDATA[Development and adaptation of virtual reality tools for PTSD and place-attachment treatment in Israel after the October 7th attacks]]></title>
        <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <author>Ehud Bodner</author><author>Ilan Vol</author><author>Dotan Bar-Natan</author><author>Albert Rizzo</author><author>Mario Mikulincer</author><author>Shachar Maidenbaum</author>
        <description><![CDATA[The October 7 attacks caused significant psychological trauma among Israeli soldiers and civilians. Virtual reality (VR) has shown promise in PTSD treatment, particularly through BraveMind, a validated Prolonged Exposure (PE) VR system developed for U.S. veterans. This formative study examined how existing VR tools can be adapted to the Israeli context and extended beyond exposure to include place-attachment-based therapy for displaced civilians. Seven experienced clinicians participated in a focus-group evaluation of two systems: the original U.S.-based BraveMind and a newly developed place-attachment VR prototype (Re-PAVeR). Participants individually experienced brief VR exposures and completed questionnaires assessing presence, user experience, and perceived clinical utility, followed by a structured group discussion. Clinicians reported a strong sense of presence and positive attitudes toward VR-based interventions. They emphasized the importance of cultural, geographical, and operational adaptations to reflect Israeli combat and civilian trauma and highlighted the potential of attachment-based VR environments for addressing grief and loss among evacuees. Based on extensive prior validation of BraveMind and the present expert feedback, the findings support further development and contextual adaptation of VR-based interventions for PTSD treatment in Israel.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1738000</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1738000</link>
        <title><![CDATA[Virtual humans in virtual reality: a scoping review on sociability, fidelity, and expression]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>J. K. Sangeeth Chandran</author><author>Marisa Llorens Salvador</author><author>Cathy Ennis</author>
        <description><![CDATA[Introduction: Virtual reality (VR) systems have evolved significantly over the past decade, enabling immersive experiences with enhanced realism and interactivity. This has motivated an interest in socially oriented applications. As user proxies, Virtual Humans (VHs) play essential roles in such applications. However, despite technological advancements, achieving realistic, expressive, and socially responsive VHs continues to present design and implementation challenges. In this scoping review, we present the state of the art of VR VHs, examining the impact of VHs on the user experience. Methodology: We reviewed 59 papers retrieved from five databases across three core themes: the implementation and impact of VH facial expressions, the impact of VH fidelity on the user experience, and the influence of VHs on human emotion and social engagement in VR. In addition, we categorized the methodologies of the reviewed studies, detailing the nature of participant interactions and the measurements taken to derive the results. Results: The synthesis of the examined studies indicates that both the social context (e.g., collaborative work vs. solo tasks) and the virtual environment (realistic office vs. fantastical world) significantly influence VH design decisions, such as the appropriate level of realism and emotional expressiveness. Discussion: Our review highlights the relation between social engagement, fidelity, and expressiveness. We offer a set of guidelines for researchers and developers aimed at optimizing VH design to enhance user experience in VR.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1772411</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1772411</link>
        <title><![CDATA[Training-induced modulation of motor task performance and psychophysiological domains through augmented sensory feedback in virtual reality]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Raviraj Nataraj</author><author>Mingxiao Liu</author><author>Yu Shi</author><author>Sophie Dewil</author><author>Noam Y. Harel</author>
        <description><![CDATA[Background: Augmented sensory feedback (ASF) in virtual reality (VR) can enhance motor performance, yet it is not standardly employed in rehabilitation protocols, and its effects on physiological and perceptual states remain underexplored. Aim: This study examined how different ASF modalities—visual, haptic, and combined visual + haptic—modulate motor performance, psychophysiological responses, and user perceptions during an upper-limb VR training task. Methods: Twenty neurotypical adults controlled a virtual robotic arm via semi-isometric muscle contractions while receiving one of four ASF training conditions: no feedback (NF), visual feedback (VF), haptic feedback (HF), or combined (multimodal) visual + haptic feedback (VHF). Improved performance (minimizing motion pathlengths and task completion time), changes in physiological signals (EEG band power, EMG amplitude, electrodermal activity, heart rate), and perceptual ratings (agency, motivation, utility) were assessed before and after each training condition. Results: VF produced greater efficiency in motor output, with improved performance (the study’s primary metrics) in conjunction with reduced EMG, along with increased electrodermal activity suggestive of higher arousal. VHF elicited significant post-training increases in EEG alpha and beta power. Motivation and utility ratings were significantly higher for VF and VHF compared to HF and NF, while agency ratings remained stable. Across all conditions, improved performance correlated with increased alpha power, reduced EMG and heart rate, and higher motivation and utility. Conclusion: These findings indicate that ASF modality differentially shapes motor, physiological, and perceptual responses. Future work should establish whether these responses generalize to clinical groups, such as those with neuromotor impairment. Ultimately, adaptive VR systems leveraging psychophysiological responses to optimize feedback in real time—balancing exertion, cognitive load, and engagement during rehabilitative training—may be key to accelerating gains in motor function.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1755571</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1755571</link>
        <title><![CDATA[Validating virtual reality for public speaking research and intervention: comparing anxiety, voice, and fluency responses to real and virtual audiences]]></title>
        <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Lamia Bettahi</author><author>Angélique Remacle</author><author>Michaël Schyns</author><author>Élodie Etienne</author><author>Anne-Marie Etienne</author><author>Anne-Lise Leclercq</author>
        <description><![CDATA[Introduction: Public speaking (PS) is a widespread activity required in many personal and professional settings. This activity is known to elicit anxiety, subsequently affecting oral communication, especially voice and speech parameters. As mastering PS skills requires practice in situations that are as similar as possible to reality, virtual reality (VR) may represent a promising method for research, training, and intervention in this domain. However, it is of paramount importance to first validate the ability of VR environments to reproduce authentic anxiety responses and communicative behaviors, which are often overlooked. Methods: Therefore, this study examined university students’ (N = 60) anxiety responses (self-reported and heart rate) as well as voice and fluency adjustments to a PS task performed in (1) a real meeting room in front of an audience, (2) a virtual meeting room in front of an audience, or (3) the same virtual meeting room without any audience. As this last condition contained no anxious stimulus, it was included as a control for the anxiety induced by VR immersion. The main objective of this study was to examine the influence of the real vs. virtual nature of the audience on anxiety, voice, and fluency parameters. Results: Our results showed that the virtual audience elicited changes in anticipatory anxiety (increased heart rate and self-reported anxiety) compared to the control condition. The participants’ strong feeling of presence and lack of side effects such as cybersickness support the acceptability and usability of the virtual environment. Discussion: Our results extend previous data and support the feasibility and relevance of using VR for PS. Additionally, we describe different VR immersion profiles among participants.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1759834</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1759834</link>
        <title><![CDATA[The efficacy of virtual reality on pain and anxiety reduction during needle-related procedures in a pediatric emergency department setting: a systematic review and meta-analysis]]></title>
        <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Jing Jing</author><author>Jin Song</author><author>Weiqing Song</author><author>Shasha Gong</author><author>Kai Yi</author><author>Jie Ren</author>
        <description><![CDATA[Background: Needle-related procedures are a common source of pain, anxiety, and fear in pediatric emergency departments (EDs), with negative psychological sequelae. Virtual reality (VR) has emerged as a non-pharmacological distraction tool, but its efficacy in the specific, high-stress ED setting requires further synthesis. Methods: We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to evaluate the efficacy of VR for managing needle-related procedural pain and psychological distress in children in the ED. We systematically searched PubMed, Embase, the Cochrane Library, and Web of Science from inception until April 2024 for relevant RCTs. Standardized mean differences (SMDs) with 95% confidence intervals (CIs) were pooled using a random-effects model. The primary outcomes were self-reported or observed pain and anxiety; secondary outcomes included fear. Results: Nine RCTs involving 944 children were included. VR distraction significantly reduced procedural pain (SMD = −0.64, 95% CI: −1.05 to −0.23, I² = 81.8%), anxiety (SMD = −0.67, 95% CI: −1.11 to −0.23, I² = 83.1%), and fear (SMD = −0.56, 95% CI: −0.77 to −0.36, I² = 34.1%) compared to standard care. Sensitivity analyses confirmed the robustness of these findings. High heterogeneity was observed for pain and anxiety outcomes, which may be attributed to variations in VR content (passive vs. interactive), comparator groups, and outcome measurement tools. Conclusion: VR is an effective non-pharmacological intervention for alleviating needle-related procedural pain, anxiety, and fear in children within the ED. Despite significant heterogeneity, the consistent beneficial effects support its integration into clinical practice. Future research should focus on standardizing VR protocols and identifying the most effective VR modalities for different pediatric age groups. Systematic Review Registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD420251181929, identifier CRD420251181929.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1763018</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1763018</link>
        <title><![CDATA[Do virtual reality tools in vestibular rehabilitation offer advantage beyond increased practice times? A narrative review]]></title>
        <pubDate>Mon, 23 Mar 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Azriel Kaplan</author><author>Liran Kalderon</author><author>Shelly Levy-Tzedek</author><author>Yoav Gimmon</author>
        <description><![CDATA[Background: The successful implementation of vestibular rehabilitation is frequently hindered by low patient adherence due to provoked symptoms and repetitive exercises. Virtual reality is increasingly deployed as a digital health solution to overcome these barriers through gamification, yet the specific active ingredient driving its clinical efficacy remains unclear. Objective: This review evaluates whether the therapeutic advantage of virtual reality in vestibular rehabilitation is driven by specific technological features or by the confounding effect of longer practice durations relative to traditional methods. Methods: We conducted a narrative review of studies published between 2010 and 2025 using PubMed, Scopus, and Google Scholar. The analysis focused on trials comparing virtual reality tools to conventional rehabilitation. Results: Although intervention protocols varied substantially, studies demonstrating better outcomes (e.g., vestibulo-ocular reflex gain, Dizziness Handicap Inventory, Berg Balance Test) with virtual reality consistently involved greater practice exposure in the virtual reality group. Our analysis suggests that when training duration was matched between intervention arms, virtual reality demonstrated no clinical advantage over conventional rehabilitation, with one recent trial reporting better outcomes for conventional rehabilitation. Conclusion: Virtual reality appears to enhance engagement in vestibular rehabilitation; however, current evidence suggests that the observed benefits could be attributed to increased practice dosage rather than unique technological effects. Future studies should standardize protocols to determine the independent contribution of virtual reality to clinical outcomes.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1780961</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1780961</link>
        <title><![CDATA[Feeling “wow” in learning: the effects of virtual reality exhibition environments on emotions and learning]]></title>
        <pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Chih-Yu Liu</author><author>Ming-Yuan Hung</author><author>Jih-Hsuan (Tammy) Lin</author>
        <description><![CDATA[Introduction: As virtual reality (VR) continues to redefine educational and cultural experiences, this study explores how virtual reality exhibition (VRE) environments influence the emotion of awe, generic learning outcomes (GLOs), and well-being. Awe, an emotional response to perceptually or conceptually vast stimuli that often leads to a need for mental accommodation, plays a pivotal role in museum and cultural experiences. While empirical evidence supports VR's ability to evoke awe through immersive vastness and extraordinary experiences, the specific role of VREs in this context remains underexplored. Method: To address this gap, the current study compares two distinct VRE settings (high perceptual vastness: outdoor vs. low perceptual vastness: indoor) and examines the mediating roles of perceived vastness and the need for accommodation, both central to the awe experience. Results: A sample of 65 participants was analyzed, revealing that the outdoor environment elicited significantly higher perceived vastness than the indoor environment. While VREs did not directly affect GLOs or well-being, perceived vastness mediated these outcomes with significant positive indirect effects. Discussion: These findings highlight the potential of thoughtful VRE design to enhance both educational and emotional visitor experiences.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1728897</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1728897</link>
        <title><![CDATA[Enhancing safety education through empathic VR experiences: influences of first-person perspective and Victim’s background story]]></title>
        <pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Vigneshkumar Chellappa</author><author>Jeffrey C. F. Ho</author><author>Yan Luximon</author>
        <description><![CDATA[Introduction: Virtual reality (VR) offers immersive opportunities to enhance safety education, yet the mechanisms through which VR fosters empathy and safety motivation are underexplored. This study examines how perspective (first-person vs. third-person) and narrative context (with vs. without a victim’s background story) influence empathy, safety motivation, and attitudes in a VR simulation of a fatal construction accident. Methods: A 2 × 2 between-subjects experiment was conducted with 160 participants who experienced a VR accident scenario under one of four conditions: first- or third-person perspective, with or without the victim’s background story. Participants completed validated and adapted measures of perceived closeness (IOS), state empathy (SES), embodiment, social presence, safety motivation, and attitudes toward construction safety. Direct, indirect, and serial mediation effects were analyzed using t-tests, ANOVA, and Hayes’ PROCESS macro. Results: The results showed that a victim’s background story increased perceived closeness and marginally increased empathy, with an indirect effect on safety motivation. While perspective alone did not directly influence empathy, the first-person perspective enhanced participants’ sense of embodiment, which in turn increased motivation and fostered social presence, resulting in more positive attitudes toward safety. Discussion: The findings underscore the significance of emotionally resonant narratives and embodiment in VR training for cultivating empathy and a commitment to safety. The study’s results provide insights to inform the design of VR-based safety training programs in the construction industry, highlighting the potential benefits of narrative-driven, immersive experiences in fostering empathy and improving safety education outcomes.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1756733</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1756733</link>
        <title><![CDATA[Double-low-dose CT combined with deep learning image reconstructions (DLIR) achieves coronary mixed reality data source optimization]]></title>
        <pubdate>2026-03-16T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Guan Li</author><author>Yiju Zhou</author><author>Ling Gao</author><author>Yi Tang</author><author>Quan Liang</author><author>Bing Zhang</author>
        <description><![CDATA[Introduction: Mixed reality combines the advantages of augmented reality and virtual reality into a single display, presenting the patient’s three-dimensional (3D) image before the user’s eyes, with coronary computed tomography (CT) data as the main data source. When acquiring a coronary mixed reality data source, the radiation dose and contrast media dose must therefore both be considered. Methods: In our study, we adopted double-low-dose CT (80 kVp, iodine delivery rate 1.2 g/s) combined with deep learning image reconstructions (DLIR). Results: We reduced the radiation dose by 42% and the contrast media dose by 31% while maintaining image quality. We found that current mixed reality 3D modeling software cannot resolve small differences in data sources; as the resolution of such software improves, the display of these differences will become more significant. Discussion: These findings provide actionable directions for future research and collaborative development of coronary mixed reality content and features.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1819537</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1819537</link>
        <title><![CDATA[Editorial: New frontiers in immersive technologies: expanding the scope of telepresence, monitoring, and intervention]]></title>
        <pubdate>2026-03-16T00:00:00Z</pubdate>
        <category>Editorial</category>
        <author>Salvatore Livatino</author><author>Adam Wojciechowski</author><author>Hai-Ning Liang</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1810187</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1810187</link>
        <title><![CDATA[Correction: Empathy in action: cultivating altruism through immersive game experiences]]></title>
        <pubdate>2026-03-16T00:00:00Z</pubdate>
        <category>Correction</category>
        <author>Samantha B. Lorenzo</author><author>Leila Okahata</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1760765</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1760765</link>
        <title><![CDATA[Refugio: a voice-driven generative virtual reality “safe place” for personalized emotion regulation – feasibility and usability study]]></title>
        <pubdate>2026-03-13T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Pau Mora</author><author>Eugenio Ivorra</author><author>Elena Parra-Vargas</author><author>Pau Soldevila-Matías</author><author>Mariano L. Alcañiz</author>
        <description><![CDATA[Introduction: Virtual reality (VR) has emerged as a pivotal tool for mental health interventions, paving the way for innovative approaches to emotion regulation. While the customization of therapeutic “safe places” is believed to enhance outcomes, existing applications often depend on inflexible, pre-configured content. This paper introduces Refugio, a VR application designed to bridge this gap through an integrated, real-time personalization pipeline powered by a self-hosted Large Language Model (LLM) and an efficient voice-to-3D generative pipeline. Methods: We conducted a feasibility and usability study with 30 non-clinical participants to evaluate user acceptance of this natural-language-driven approach and to preliminarily explore its potential to support emotional outcomes and immersion while minimizing cognitive workload. We assessed usability (System Usability Scale, SUS), cognitive load (NASA-TLX), immersion (RJPQ), and pre- and post-session emotional state (Self-Assessment Manikin, SAM). Results: The study provided compelling preliminary evidence of the system’s feasibility and high user acceptance. Refugio achieved an “Excellent” mean SUS score of 86.33, alongside low reported mental demand and frustration. Participants showed strong engagement, creating 147 objects, and reported positive emotional shifts, with the majority experiencing increased valence (21 of 30) and decreased arousal (19 of 30). Discussion: This study demonstrates that Refugio’s architecture is a viable and well-received method for implementing deep generative personalization. Our findings suggest that this natural-language-driven approach balances creative freedom with high usability and low cognitive load, establishing a robust foundation for future clinical validation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1741892</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1741892</link>
        <title><![CDATA[Measuring perceived physical fidelity in virtual reality and virtual environments]]></title>
        <pubdate>2026-03-11T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Bree McEwan</author><author>Clarice Wu</author><author>Harris Yang</author><author>Michael Nixon</author>
        <description><![CDATA[As communication scholars become increasingly interested in studying virtual reality (VR) as a communication channel, it will be important to establish useful measures of perceptual variables in virtual environments. One such variable is physical fidelity: the degree to which virtual environments replicate or resemble places in the physical world. In computer science and other fields interested in VR, this variable is often measured as reaction time within the system. For social scientific VR scholars, however, it can be important to understand how much physical fidelity the user perceives the environment to have. In the existing literature, when physical fidelity is measured as a perceptual variable, it is often conflated with measures of immersion or spatial presence. This paper presents a confirmatory factor analysis approach to establishing a well-fitting scale of perceived physical fidelity across three separate samples, and delineates the conceptual and operational differences between physical fidelity, immersion, and spatial presence.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1757251</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1757251</link>
        <title><![CDATA[Eye tracking in virtual reality for neurorehabilitation: a narrative perspective on needs, challenges, and pathways beyond game engines]]></title>
        <pubdate>2026-03-11T00:00:00Z</pubdate>
        <category>Review</category>
        <author>Minxin Cheng</author><author>Leanne Chukoskie</author>
        <description><![CDATA[Virtual reality (VR) systems with integrated eye tracking offer a powerful way to study and support sensorimotor and cognitive function in neurorehabilitation. Eye movements provide a high-bandwidth window onto information processing, visuomotor integration, cognitive load, and affect, while immersive VR enables more ecologically valid yet controllable tasks spanning visual exploration, movement execution, object interaction, and social exchange. This narrative review synthesizes recent work on eye tracking in VR for neurorehabilitation, focusing on three application domains: assessment, intervention, and supportive design, together with the technical and governance requirements needed to make these systems clinically meaningful and ethically responsible. We highlight how the dominant implementation pattern of integrated headsets streaming preprocessed gaze rays into game engines introduces black-box processing, frame-bound timing, and limited calibration control that pose threats to validity, reproducibility, and cross-site comparability. We review emerging workarounds, including modular architectures that decouple sensing and rendering, explicit latency benchmarking and cross-modal synchronization, adaptive and implicit calibration approaches, and privacy-by-design frameworks from digital phenotyping and metaverse healthcare. Taken together, the evidence suggests that eye-tracked VR is already capable of supporting informative assessments and promising interventions, but that realizing its full potential for neurorehabilitation will require a shift toward architectures that support transparent control over sampling, calibration, timing, and data governance, as well as handling eye tracking data as both a sensitive clinical signal and a protected form of personal data.]]></description>
      </item>
      </channel>
    </rss>