<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Virtual Reality | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/virtual-reality</link>
        <description>RSS Feed for Frontiers in Virtual Reality | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Mon, 06 Apr 2026 20:57:50 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1743641</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1743641</link>
        <title><![CDATA[Implementing IVR for family therapy training: a prototype for first family therapy sessions]]></title>
        <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
        <category>Perspective</category>
        <author>M. Everri</author><author>A. C. Queiroz</author><author>M. Heitmayer</author><author>A. Campbell</author><author>F. Balestra</author><author>D. Hanley</author><author>L. Fruggeri</author><author>J. Jacobson</author><author>M. Messena</author><author>T. Merrick</author><author>V. O’Brien</author><author>D. S. Rait</author><author>A. Roberts</author><author>S. Thaker</author>
        <description><![CDATA[Role plays and live supervision have been core methods in family therapy education, offering trainees experiential opportunities to practice therapeutic techniques, engage in reflexivity, and develop systemic awareness. However, these traditional methods face limitations in scalability, standardization, and emotional safety. Immersive Virtual Reality (IVR)—a technology capable of eliciting realistic affective and cognitive responses through a sense of presence—presents new possibilities for addressing these challenges. Drawing upon research in simulation-based learning, this article explores how IVR can enhance the acquisition of core family therapy competencies (technical, relational, epistemological, and context sensitivity). The paper synthesizes existing family therapy education models and methods with IVR-based training research. It highlights the unique pedagogical affordances of IVR, i.e., embodied perspective-taking, emotional safety, standardization, and repeatability, and links these to family therapy training goals. An IVR prototype developed by the authors simulates a first family therapy session, providing a proof of concept for integrating virtual simulations into therapist education. Preliminary feedback from professionals indicates that IVR can foster engagement and self-reflexivity, though challenges remain regarding content realism, cost, and trainers’ digital skills. The article concludes by identifying future directions for research and practice, emphasizing the need for interdisciplinary collaboration, empirical validation, and ethical frameworks to guide the responsible implementation of IVR in family therapy education.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1693453</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1693453</link>
        <title><![CDATA[Development and adaptation of virtual reality tools for PTSD and place-attachment treatment in Israel after the October 7th attacks]]></title>
        <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <author>Ehud Bodner</author><author>Ilan Vol</author><author>Dotan Bar-Natan</author><author>Albert Rizzo</author><author>Mario Mikulincer</author><author>Shachar Maidenbaum</author>
        <description><![CDATA[The October 7 attacks caused significant psychological trauma among Israeli soldiers and civilians. Virtual reality (VR) has shown promise in PTSD treatment, particularly through BraveMind, a validated Prolonged Exposure (PE) VR system developed for U.S. veterans. This formative study examined how existing VR tools can be adapted to the Israeli context and extended beyond exposure to include place-attachment-based therapy for displaced civilians. Seven experienced clinicians participated in a focus-group evaluation of two systems: the original U.S.-based BraveMind and a newly developed place-attachment VR prototype (Re-PAVeR). Participants individually experienced brief VR exposures and completed questionnaires assessing presence, user experience, and perceived clinical utility, followed by a structured group discussion. Clinicians reported a strong sense of presence and positive attitudes toward VR-based interventions. They emphasized the importance of cultural, geographical, and operational adaptations to reflect Israeli combat and civilian trauma and highlighted the potential of attachment-based VR environments for addressing grief and loss among evacuees. Based on extensive prior validation of BraveMind and the present expert feedback, the findings support further development and contextual adaptation of VR-based interventions for PTSD treatment in Israel.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1738000</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1738000</link>
        <title><![CDATA[Virtual humans in virtual reality: a scoping review on sociability, fidelity, and expression]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>J. K. Sangeeth Chandran</author><author>Marisa Llorens Salvador</author><author>Cathy Ennis</author>
        <description><![CDATA[Introduction: Virtual reality (VR) systems have evolved significantly over the past decade, enabling immersive experiences with enhanced realism and interactivity. This has motivated an interest in socially oriented applications. As user proxies, Virtual Humans (VHs) play essential roles in such applications. However, despite technological advancements, achieving realistic, expressive, and socially responsive VHs continues to present design and implementation challenges. In this scoping review, we present the state-of-the-art of VR VHs, examining the impact of VHs on the user experience. Methodology: We reviewed 59 papers retrieved from five databases across three core themes: the implementation and impact of VH facial expressions, the impact of VH fidelity on the user experience, and the influence of VHs on human emotion and social engagement in VR. In addition, we categorized the methodologies of the reviewed studies, detailing the nature of participant interactions and the measurements taken to derive the results. Results: The synthesis of the examined studies indicates that both the social context (e.g., collaborative work vs. solo tasks) and the virtual environment (realistic office vs. fantastical world) significantly influence VH design decisions, such as the appropriate level of realism and emotional expressiveness. Discussion: Our review highlights the relation between social engagement, fidelity and expressiveness. We offer a set of guidelines for researchers and developers aimed at optimizing VH design to enhance user experience in VR.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1772411</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1772411</link>
        <title><![CDATA[Training-induced modulation of motor task performance and psychophysiological domains through augmented sensory feedback in virtual reality]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Raviraj Nataraj</author><author>Mingxiao Liu</author><author>Yu Shi</author><author>Sophie Dewil</author><author>Noam Y. Harel</author>
        <description><![CDATA[Background: Augmented sensory feedback (ASF) in virtual reality (VR) can enhance motor performance, yet it is not yet standard in rehabilitation protocols, and its effects on physiological and perceptual states remain underexplored. Aim: This study examined how different ASF modalities—visual, haptic, and combined visual + haptic—modulate motor performance, psychophysiological responses, and user perceptions during an upper-limb VR training task. Methods: Twenty neurotypical adults controlled a virtual robotic arm via semi-isometric muscle contractions, while receiving one of four ASF training conditions: no feedback (NF), visual feedback (VF), haptic feedback (HF), or combined (multimodal) visual + haptic feedback (VHF). Improved performance (minimizing motion pathlengths and task completion time), changes in physiological signals (EEG band power, EMG amplitude, electrodermal activity, heart rate), and perceptual ratings (agency, motivation, utility) were assessed before and after each training condition. Results: VF produced more efficient motor output, with improved performance (the primary metrics of the study) in conjunction with reduced EMG and increased electrodermal activity suggestive of higher arousal. VHF elicited significant post-training increases in EEG alpha and beta power. Motivation and utility ratings were significantly higher for VF and VHF compared to HF and NF, while agency ratings remained stable. Across all conditions, improved performance correlated with increased alpha power, reduced EMG and heart rate, and higher motivation and utility. Conclusion: These findings indicate that ASF modality differentially shapes motor, physiological, and perceptual responses. Future work should establish whether these responses generalize to clinical groups, such as those with neuromotor impairment. Ultimately, adaptive VR systems leveraging psychophysiological responses to optimize feedback in real time—balancing exertion, cognitive load, and engagement during rehabilitative training—may be key to accelerating gains in motor function.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1759834</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1759834</link>
        <title><![CDATA[The efficacy of virtual reality on pain and anxiety reduction during needle-related procedures in a pediatric emergency department setting: a systematic review and meta-analysis]]></title>
        <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Jing Jing</author><author>Jin Song</author><author>Weiqing Song</author><author>Shasha Gong</author><author>Kai Yi</author><author>Jie Ren</author>
        <description><![CDATA[Background: Needle-related procedures are a common source of pain, anxiety, and fear in pediatric emergency departments (ED), with negative psychological sequelae. Virtual reality (VR) has emerged as a non-pharmacological distraction tool, but its efficacy in the specific, high-stress ED setting requires further synthesis. Methods: We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to evaluate the efficacy of VR for managing needle-related procedural pain and psychological distress in children in the ED. We systematically searched PubMed, Embase, Cochrane Library, and Web of Science from inception until April 2024 for relevant RCTs. Standardized mean differences (SMDs) with 95% confidence intervals (CIs) were pooled using a random-effects model. The primary outcomes were self-reported or observed pain and anxiety; secondary outcomes included fear. Results: Nine RCTs involving 944 children were included. VR distraction significantly reduced procedural pain (SMD = −0.64, 95% CI: −1.05 to −0.23, I² = 81.8%), anxiety (SMD = −0.67, 95% CI: −1.11 to −0.23, I² = 83.1%), and fear (SMD = −0.56, 95% CI: −0.77 to −0.36, I² = 34.1%) compared to standard care. Sensitivity analyses confirmed the robustness of these findings. High heterogeneity was observed for pain and anxiety outcomes, which may be attributed to variations in VR content (passive vs. interactive), comparator groups, and outcome measurement tools. Conclusion: VR is an effective non-pharmacological intervention for alleviating needle-related procedural pain, anxiety, and fear in children within the ED. Despite significant heterogeneity, the consistent beneficial effects support its integration into clinical practice. Future research should focus on standardizing VR protocols and identifying the most effective VR modalities for different pediatric age groups. Systematic Review Registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD420251181929, identifier CRD420251181929.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1755571</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1755571</link>
        <title><![CDATA[Validating virtual reality for public speaking research and intervention: comparing anxiety, voice, and fluency responses to real and virtual audiences]]></title>
        <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Lamia Bettahi</author><author>Angélique Remacle</author><author>Michaël Schyns</author><author>Élodie Etienne</author><author>Anne-Marie Etienne</author><author>Anne-Lise Leclercq</author>
        <description><![CDATA[Introduction: Public speaking (PS) is a widespread activity required in many personal and professional settings. This activity is known to elicit anxiety, subsequently affecting oral communication, especially voice and speech parameters. As mastering PS skills requires practice in situations that are as similar as possible to reality, virtual reality (VR) may represent a promising method for research, training and intervention in this domain. However, it is of paramount importance to first validate VR environments in their ability to reproduce authentic anxiety responses and communicative behaviors, which are often overlooked. Methods: Therefore, this study examined university students’ (N = 60) anxiety responses (self-reported and heart rate) as well as voice and fluency adjustments during a PS task performed in (1) a real meeting room in front of an audience, (2) a virtual meeting room in front of an audience, or (3) the same virtual meeting room without any audience. As this last condition contained no anxiety-inducing stimulus, it was included to act as a control for the anxiety induced by VR immersion. The main objective of this study was to examine the influence of the real vs. virtual nature of the audience on anxiety, voice and fluency parameters. Results: Our results showed that the virtual audience elicited changes in anticipatory anxiety (increased heart rate and self-reported anxiety) compared to the control condition. Participants’ strong feeling of presence and lack of side effects such as cybersickness support the acceptability and usability of the virtual environment. Discussion: Our results extend previous data and support the feasibility and relevance of using VR for PS. Additionally, we describe different VR immersion profiles among participants.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1763018</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1763018</link>
        <title><![CDATA[Do virtual reality tools in vestibular rehabilitation offer advantage beyond increased practice times? A narrative review]]></title>
        <pubDate>Mon, 23 Mar 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Azriel Kaplan</author><author>Liran Kalderon</author><author>Shelly Levy-Tzedek</author><author>Yoav Gimmon</author>
        <description><![CDATA[Background: The successful implementation of vestibular rehabilitation is frequently hindered by low patient adherence due to provoked symptoms and repetitive exercises. Virtual reality is increasingly deployed as a digital health solution to overcome these barriers through gamification, yet the specific active ingredient driving its clinical efficacy remains unclear. Objective: This review evaluates whether the therapeutic advantage of virtual reality in vestibular rehabilitation is driven by specific technological features or by the confounding effect of longer practice durations relative to traditional methods. Methods: We conducted a narrative review of studies published between 2010 and 2025 using PubMed, Scopus, and Google Scholar. The analysis focused on trials comparing virtual reality tools to conventional rehabilitation. Results: Although intervention protocols varied substantially, studies demonstrating better outcomes (e.g., Vestibulo-Ocular Reflex gain, Dizziness Handicap Inventory, Berg Balance Test) with virtual reality consistently involved greater practice exposure in the virtual reality group. Our analysis suggests that when training duration was matched between intervention arms, virtual reality demonstrated no clinical advantage over conventional rehabilitation, with one recent trial reporting better outcomes for conventional rehabilitation. Conclusion: Virtual reality appears to enhance engagement in vestibular rehabilitation; however, current evidence suggests that the observed benefits could be attributed to increased practice dosage rather than unique technological effects. Future studies should standardize protocols to determine the independent contribution of virtual reality to clinical outcomes.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1780961</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1780961</link>
        <title><![CDATA[Feeling “wow” in learning: the effects of virtual reality exhibition environments on emotions and learning]]></title>
        <pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Chih-Yu Liu</author><author>Ming-Yuan Hung</author><author>Jih-Hsuan (Tammy) Lin</author>
        <description><![CDATA[Introduction: As virtual reality (VR) continues to redefine educational and cultural experiences, this study explores how virtual reality exhibition (VRE) environments influence the emotion of awe, generic learning outcomes (GLOs), and wellbeing. Awe, an emotional response to perceptually or conceptually vast stimuli that often leads to a need for mental accommodation, plays a pivotal role in museum and cultural experiences. While empirical evidence supports VR's ability to evoke awe through immersive vastness and extraordinary experiences, the specific role of VREs in this context remains underexplored. Method: To address this gap, the current study compares two distinct VRE settings (high perceptual vastness: outdoor vs. low perceptual vastness: indoor) and examines the mediating roles of perceived vastness and the need for accommodation, both central to the awe experience. Results: A sample of 65 participants was analyzed, revealing that the outdoor environment elicited significantly higher vastness than the indoor environment. While VREs did not directly affect GLOs or wellbeing, vastness mediated these outcomes with significant positive indirect effects. Discussion: These findings highlight the potential of thoughtful VRE design to enhance both educational and emotional visitor experiences.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1728897</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1728897</link>
        <title><![CDATA[Enhancing safety education through empathic VR experiences: influences of first-person perspective and Victim’s background story]]></title>
        <pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Vigneshkumar Chellappa</author><author>Jeffrey C. F. Ho</author><author>Yan Luximon</author>
        <description><![CDATA[Introduction: Virtual reality (VR) offers immersive opportunities to enhance safety education, yet the mechanisms through which VR fosters empathy and safety motivation are underexplored. This study examines how perspective (first-person vs. third-person) and narrative context (with vs. without a victim’s background story) influence empathy, safety motivation, and attitudes in a VR simulation of a fatal construction accident. Methods: A 2 × 2 between-subjects experiment was conducted with 160 participants who experienced a VR accident scenario under one of four conditions: first- or third-person perspective, with or without the victim’s background story. Participants completed validated and adapted measures of perceived closeness (IOS), state empathy (SES), embodiment, social presence, safety motivation, and attitudes toward construction safety. Direct, indirect, and serial mediation effects were analyzed using t-tests, ANOVA, and Hayes’ PROCESS macro. Results: The results showed that a victim’s background story increased perceived closeness and marginally increased empathy, with an indirect effect on safety motivation. While perspective alone did not directly influence empathy, the first-person perspective enhanced participants’ sense of embodiment, which in turn increased motivation and fostered social presence, resulting in more positive attitudes toward safety. Discussion: The findings underscore the significance of emotionally resonant narratives and embodiment in VR training for cultivating empathy and a commitment to safety. The study’s results provide insights to inform the design of VR-based safety training programs in the construction industry, highlighting the potential benefits of narrative-driven, immersive experiences in fostering empathy and improving safety education outcomes.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1756733</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1756733</link>
        <title><![CDATA[Double-low-dose CT combined with deep learning image reconstructions (DLIR) achieves coronary mixed reality data source optimization]]></title>
        <pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Guan Li</author><author>Yiju Zhou</author><author>Ling Gao</author><author>Yi Tang</author><author>Quan Liang</author><author>Bing Zhang</author>
        <description><![CDATA[Introduction: Mixed reality combines the advantages of augmented reality and virtual reality into a single image and can display the patient’s three-dimensional (3D) image in front of the user’s eyes, with coronary computed tomography (CT) data as the main data source. Therefore, when acquiring a coronary mixed reality data source, the issues of radiation dose and contrast media dose must be considered. Methods: In our study, we adopted double-low-dose CT (80 kVp, iodine delivery rate 1.2 g/s) combined with deep learning image reconstructions (DLIR). Results: We reduced the radiation dose by 42% and the contrast media dose by 31% while maintaining image quality. We found that the resolution of current mixed reality 3D modeling software cannot distinguish small differences in data sources. As the resolution of 3D modeling software improves, the display of small differences in data sources will become more apparent. Discussion: These findings provide actionable directions for future research and collaborative development of coronary mixed reality content and features.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1810187</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1810187</link>
        <title><![CDATA[Correction: Empathy in action: cultivating altruism through immersive game experiences]]></title>
        <pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
        <category>Correction</category>
        <author>Samantha B. Lorenzo</author><author>Leila Okahata</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1819537</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1819537</link>
        <title><![CDATA[Editorial: New frontiers in immersive technologies: expanding the scope of telepresence, monitoring, and intervention]]></title>
        <pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Salvatore Livatino</author><author>Adam Wojciechowski</author><author>Hai-Ning Liang</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1760765</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1760765</link>
        <title><![CDATA[Refugio: a voice-driven generative virtual reality “safe place” for personalized emotion regulation – feasibility and usability study]]></title>
        <pubDate>Fri, 13 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Pau Mora</author><author>Eugenio Ivorra</author><author>Elena Parra-Vargas</author><author>Pau Soldevila-Matías</author><author>Mariano L. Alcañiz</author>
        <description><![CDATA[Introduction: Virtual Reality (VR) has emerged as a pivotal tool for mental health interventions, paving the way for innovative approaches to emotion regulation. While the customization of therapeutic “safe places” is believed to enhance outcomes, existing applications often depend on inflexible, pre-configured content. This paper introduces Refugio, a VR application designed to bridge this gap through integrated, real-time personalization powered by a self-hosted Large Language Model (LLM) and an efficient voice-to-3D generative pipeline. Methods: We conducted a feasibility and usability study with 30 non-clinical participants to evaluate the user acceptance of this natural language-driven approach and to preliminarily explore its potential to support emotional outcomes and immersion while minimizing cognitive workload. We assessed usability (System Usability Scale, SUS), cognitive load (NASA-TLX), immersion (RJPQ), and emotional state before and after (Self-Assessment Manikin, SAM). Results: The study provided compelling preliminary evidence of the system’s feasibility and high user acceptance. Refugio achieved an “Excellent” mean SUS score of 86.33, alongside low reported mental demand and frustration. Participants showed strong engagement, creating 147 objects, and reported positive emotional shifts, with the majority experiencing increased valence (21 out of 30) and decreased arousal (19 out of 30). Discussion: This study demonstrates that Refugio’s architecture is a viable and well-received method for implementing deep generative personalization. Our findings suggest that this natural language-driven approach balances creative freedom with high usability and low cognitive load, establishing a robust foundation for future clinical validation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1741892</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1741892</link>
        <title><![CDATA[Measuring perceived physical fidelity in virtual reality and virtual environments]]></title>
        <pubDate>Wed, 11 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Bree McEwan</author><author>Clarice Wu</author><author>Harris Yang</author><author>Michael Nixon</author>
        <description><![CDATA[As communication scholars become increasingly interested in studying virtual reality (VR) as a communication channel, it will be important to establish useful measures of perceptual variables in virtual environments. One such variable is physical fidelity: the degree to which virtual environments replicate or resemble places in the physical world. In computer science and other fields interested in VR, this variable is often measured as reaction time within the system. However, for social scientific VR scholars, it can be important to understand how much the user perceives the environment to have physical fidelity. In the existing literature, when physical fidelity is measured as a perceptual variable, it is often conflated with measures of immersion or spatial presence. This paper presents a confirmatory factor analysis approach to establishing a well-fitting scale of perceived physical fidelity across three separate samples, as well as delineating the conceptual and operational differences between physical fidelity, immersion, and spatial presence.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1757251</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1757251</link>
        <title><![CDATA[Eye tracking in virtual reality for neurorehabilitation: a narrative perspective on needs, challenges, and pathways beyond game engines]]></title>
        <pubDate>Wed, 11 Mar 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Minxin Cheng</author><author>Leanne Chukoskie</author>
        <description><![CDATA[Virtual reality (VR) systems with integrated eye tracking offer a powerful way to study and support sensorimotor and cognitive function in neurorehabilitation. Eye movements provide a high-bandwidth window onto information processing, visuomotor integration, cognitive load, and affect, while immersive VR enables more ecologically valid yet controllable tasks spanning visual exploration, movement execution, object interaction, and social exchange. This narrative review synthesizes recent work on eye tracking in VR for neurorehabilitation, focusing on three application domains: assessment, intervention, and supportive design, together with the technical and governance requirements needed to make these systems clinically meaningful and ethically responsible. We highlight how the dominant implementation pattern of integrated headsets streaming preprocessed gaze rays into game engines introduces black-box processing, frame-bound timing, and limited calibration control that pose threats to validity, reproducibility, and cross-site comparability. We review emerging workarounds, including modular architectures that decouple sensing and rendering, explicit latency benchmarking and cross-modal synchronization, adaptive and implicit calibration approaches, and privacy-by-design frameworks from digital phenotyping and metaverse healthcare. Taken together, the evidence suggests that eye-tracked VR is already capable of supporting informative assessments and promising interventions, but that realizing its full potential for neurorehabilitation will require a shift toward architectures that support transparent control over sampling, calibration, timing, and data governance, as well as handling eye tracking data as both a sensitive clinical signal and a protected form of personal data.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1696677</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1696677</link>
        <title><![CDATA[Diegetic and object-based spatial audio in cinematic VR: a PRISMA-Guided systematic review with a functional taxonomy and validation framework]]></title>
        <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Vimala Perumal</author><author>Zeeshan Jawed Shah</author>
        <description><![CDATA[Introduction: Cinematic VR (CVR) removes the director’s frame, creating the challenge of guiding audience attention without breaking immersion. This systematic review synthesizes empirical evidence on two audio modalities with strong potential to function as narrative agents—diegetic audio (sounds from within the story world) and object-based spatial audio (discrete sound “objects” rendered with positional metadata)—to clarify how they guide attention, shape affect and presence, and how these effects are validated. Methods: Following Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) 2020, searches in IEEE Xplore, ACM Digital Library, Scopus, and Web of Science (last searched June 2025) identified studies using diegetic and/or object‐based spatial audio as narrative devices in CVR with empirical user data; non-diegetic‐only or purely technical papers without user measures were excluded (except where used as baselines). We conducted a qualitative synthesis across behavioral (head/eye tracking), subjective (presence/engagement), and physiological (HR/EMG/EDA/PPG) measures; no protocol registration was performed. Results: Eighteen studies met inclusion criteria. Across studies, world-locked, off-screen diegetic cues were repeatedly reported to redirect gaze and shorten time-to-region-of-interest after cuts, while object-based rendering enabled precise, dynamic cue placement and was commonly associated with higher presence/immersion and affective arousal relative to non-spatial or head-locked baselines. Discussion: Evidence remains constrained by methodological heterogeneity, small-to-moderate samples, inconsistent reporting, and limited direct measures of narrative comprehension. We contribute (i) a functional taxonomy of CVR narrative audio techniques aligned to diegetic/object-based practice, (ii) a Validation Triangulation Framework integrating behavioral, subjective, and physiological evidence, and (iii) a Minimum Reporting & Sharing Standard for CVR Narrative Audio specifying what to report and how to share data/metadata, aligned with PRISMA guidance and Findable, Accessible, Interoperable, Reusable (FAIR) principles.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1706034</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1706034</link>
        <title><![CDATA[Effects of central and peripheral Go/No-Go signals on reactions to peripheral stimuli in virtual reality]]></title>
        <pubdate>2026-03-06T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Dan Bürger</author><author>Katalin Altrogge</author><author>Florian Heilmann</author><author>Stefan Pastel</author><author>Kerstin Witte</author>
        <description><![CDATA[Introduction: Reactions and their inhibition are crucial in many sports. Visual reaction times (RTs) typically increase with eccentricity, and inhibition tasks produce slower RTs than simple reaction tasks. In sports, responses to peripheral stimuli often depend on an additional signal. This randomized within-subject study examined how the position of additional Go/No-Go signals affects RTs to peripheral stimuli. Method: Healthy adults (n = 30; eligibility: no red-green blindness, no motor restrictions) performed a virtual reality reaction task to stimuli at varying eccentricities in three conditions using a virtual reaction wall. In the no-signal condition, participants reacted to illuminated buttons. In the two Go/No-Go conditions, an additional signal (green = Go, red = No-Go) appeared either centrally (central-signal condition) or peripherally (45° horizontally left and right; peripheral-signal condition). The primary outcome was RT to Go stimuli. Condition order was randomized for each participant using a computer-generated allocation. Results: Three participants were excluded due to high error rates. Data from the remaining 27 participants showed that RTs increased with eccentricity. Reactions were fastest in the no-signal condition, followed by the central-signal condition, and slowest in the peripheral-signal condition. No evidence for differences in the slope of the eccentricity-related RT increase was observed. Conclusion: This virtual reality setup replicated known real-world findings: RTs increase with eccentricity and are slower in inhibition tasks. The position of a Go/No-Go signal influences absolute RTs, but no evidence for a facilitatory or limiting influence of attentional shifts induced by the additional signals was found.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1737482</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1737482</link>
        <title><![CDATA[Gamified virtual reality in post-stroke neurorehabilitation: a systematic review]]></title>
        <pubdate>2026-03-06T00:00:00Z</pubdate>
        <category>Systematic Review</category>
        <author>Alin Moldoveanu</author><author>Iulia Cristina Stănică</author><author>Andrei Cristian Lambru</author><author>Ana Magdalena Anghel</author><author>Silviu Stăncioiu</author><author>Anca Morar</author><author>Victor Asavei</author><author>Robert-Gabriel Lupu</author><author>Delia Cinteză</author>
        <description><![CDATA[Introduction: Huge and increasing global rehabilitation needs, affecting more than 2.4 billion people, require the mainstream adoption of the latest innovative technological solutions, including but not limited to virtual reality (VR), robotics, artificial intelligence (AI), or games. This systematic review focuses on the use and potential of gamified VR in post-stroke neurorehabilitation to effectively address patient engagement and motivation, scalability, costs, and the scarcity of specialists. Methods: Our review, conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, identified 4,856 records narrowed down to 66 key studies from five major databases. These studies were categorized and analyzed based on the technologies used, game mechanics, gamification techniques, adaptation to the user, and evaluation procedure. Results: Our findings draw significant conclusions for each of these aspects and ultimately highlight the potential of gamified VR to become a mainstream neurorehabilitation technology for post-stroke patients, scalable to societal needs and improving recovery outcomes. Our review concludes with a discussion of the future directions and implications for clinical practice in neurorehabilitation using gamified VR technologies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1743491</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1743491</link>
        <title><![CDATA[Motion-based user identification across XR and metaverse applications by deep classification and similarity learning]]></title>
        <pubdate>2026-03-06T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Lukas Schach</author><author>Christian Rack</author><author>Ryan P. McMahan</author><author>Marc Erich Latoschik</author>
        <description><![CDATA[This paper examines the generalization capacity of two state-of-the-art classification and similarity learning models in reliably identifying users from their motion patterns across diverse eXtended Reality (XR) applications. We introduce a novel dataset comprising motion data from 49 users in five XR applications: four XR games with distinct task and action profiles, and one social XR application without predefined tasks. Using this dataset, we evaluate both models’ identification performance and, in particular, their ability to generalize across applications. Our results show that while the models can accurately identify individuals within the same application, their cross-application performance remains limited. Accordingly, recent approaches to biometric motion-based verification and identification exhibit low generalization capacity. While the results suggest that current risks of unintended or privacy-critical user identification in XR and Metaverse contexts are limited, they also indicate that these risks are likely to grow rapidly as model generalization improves. To support reproducibility and encourage further research on motion-based user identification in typical Metaverse use cases, we release our cross-application XR motion dataset and accompanying code publicly.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1737227</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1737227</link>
        <title><![CDATA[Effects of augmented and virtual reality-based interventions on social behaviors in children with ADHD: a narrative systematic review]]></title>
        <pubdate>2026-03-05T00:00:00Z</pubdate>
        <category>Systematic Review</category>
        <author>Hatice Yalçın</author><author>Nilgün Sarp</author>
        <description><![CDATA[Introduction: Difficulties in social skills and social behavior are among the core functional impairments experienced by children with Attention Deficit Hyperactivity Disorder (ADHD). Augmented Reality (AR) and Virtual Reality (VR) technologies have gained increasing attention as innovative intervention tools to support social development in this population. However, evidence regarding their effectiveness remains fragmented and methodologically heterogeneous. Methods: This narrative systematic review synthesized empirical evidence on AR- and VR-based interventions for social skills and behaviors in children and adolescents with ADHD. Following PRISMA 2020 guidelines, a comprehensive literature search was conducted across Web of Science, Scopus, ERIC, PsycINFO, and Google Scholar for studies published between 2015 and 2025. Studies were screened using predefined inclusion and exclusion criteria based on the PICO framework. Eligible studies included quantitative experimental or quasi-experimental designs with standardized quantitative measures of social outcomes. Risk of bias was assessed using appropriate methodological appraisal tools, and findings were synthesized narratively due to heterogeneity. Results: A total of 148 records were identified, of which 28 studies met the inclusion criteria. The studies demonstrated considerable variability in sample characteristics, intervention duration, technological modality (AR, VR, or hybrid systems), and outcome measures. VR interventions were more frequently studied and showed consistent short-term benefits, while AR approaches demonstrated contextualized and ecologically valid applications. Hybrid AR-VR interventions appeared promising for engagement and skill generalization, but evidence remains limited due to small sample sizes and heterogeneous designs. Discussion: Findings suggest that AR and VR technologies may provide beneficial and engaging tools for supporting social development in children with ADHD. Nevertheless, conclusions regarding the relative effectiveness of specific modalities should be interpreted cautiously. Future research should prioritize well-designed randomized controlled trials, standardized outcome measures, and direct comparisons between AR, VR, and hybrid interventions to strengthen the evidence base and inform clinical practice.]]></description>
      </item>
      </channel>
    </rss>