<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Virtual Reality | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/virtual-reality</link>
        <description>RSS Feed for Frontiers in Virtual Reality | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Wed, 06 May 2026 07:33:55 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1772574</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1772574</link>
        <title><![CDATA[Virtual Embodiment in virtual mirrors and presence: effects on adolescent self-esteem]]></title>
        <pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Reut Kapah</author><author>Sara A. Freedman</author>
        <description><![CDATA[Background: Recent advances in Virtual Reality (VR) have enabled bodily illusions that provide a sense of Virtual Embodiment. This is achieved through virtual mirrors, which create the illusion that the body viewed in the mirror is the user’s own. While Virtual Embodiment has been shown to affect self-esteem in adults, its impact on adolescents remains largely unknown. Objective: Adolescence is a critical stage of self-esteem development, marked by rapid physical and psychological changes. This study examined whether embodiment in a physically powerful virtual body, viewed via a virtual mirror, could enhance adolescents’ self-esteem. It also explored the moderating role of Sense of Presence in this relationship. Method: A total of 160 adolescents were randomly assigned to an experimental group or a control group. The experimental group was placed in a virtual environment with a virtual mirror, in which participants could see themselves as Power VE, a physically powerful Virtual Embodiment. The control group interacted with the same environment but without a mirror or physical actions. All participants completed the Rosenberg Self-Esteem Scale before and after the VR experience, along with measures of Sense of Presence and Virtual Embodiment. Results: Both groups demonstrated increases in self-esteem following the VR experience. Participants in the experimental group reported significantly higher levels of Sense of Presence than the control group. Although Sense of Presence did not significantly moderate the relationship between group assignment and self-esteem change, higher Sense of Presence was associated with a more immersive embodiment experience. Conclusion: The findings suggest that brief exposure to virtual environments may be associated with short-term changes in adolescents’ self-esteem and highlight Sense of Presence as a central experiential component of Virtual Embodiment. These insights underscore the potential of Virtual Embodiment for educational and therapeutic contexts tailored to adolescents’ developmental needs.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1769463</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1769463</link>
        <title><![CDATA[XR application development in the 21st century: a survey spanning two decades of XR developers, applications, and challenges]]></title>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Tuukka M. Takala</author><author>Ryan P. McMahan</author><author>Nayan Chawla</author><author>Ernst Kruijff</author><author>Takashi Kawai</author>
        <description><![CDATA[This article provides an in-depth analysis of how virtual and augmented reality (XR) application development has evolved since the emergence of affordable XR hardware in the mid-2010s. Based on surveying 158 XR developers about their experiences between 2003 and 2020, and additional interviews conducted in 2024, our study reveals a significant reduction in barriers to entry for creating XR applications. Although many of the technical challenges faced by developers have eased over time, testing-related difficulties remain a major hurdle in XR application development, and possibly have become more pronounced over time. Moreover, despite the availability of XR toolkits, developers still tend to build common features like graphical user interfaces and object manipulation from scratch rather than reusing existing components. In addition to documenting these trends in the post-2015 XR landscape, the article proposes strategies to address ongoing challenges, presents a ranked developer wishlist of XR toolkit features, and suggests ways to further support and empower XR developers.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1746725</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1746725</link>
        <title><![CDATA[Development of the interactive virtual laboratory for fundamental mechanics – iMechLab]]></title>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <category>Brief Research Report</category>
        <author>Marija Šljivak</author><author>Saša Lazović</author><author>Vladimir M. Petrović</author>
        <description><![CDATA[The extensive development of AI- and virtual reality-based technologies in recent years has opened new perspectives on educational tools, making them more immersive and accessible. Bearing in mind that students spend a significant amount of time each day engaging in online and virtual activities (e.g., social networks, computer games, etc.), it is critically important to leverage this and get them interested in science through these kinds of interactive media. This applies especially to fields of science that are traditionally considered demanding to engage with. This paper presents current progress on the development of iMechLab, an interactive virtual laboratory for fundamental mechanics. Our virtual laboratory contains five simulation modules aimed at covering different aspects of fundamental problems in mechanics: (1) motion of a rectangular block on a horizontal surface under an external force and friction, (2) motion of a rectangular block down an inclined plane under an external force and friction, (3) a pendulum with adjustable length, mass, and initial angle, (4) a vertically oscillating spring-mass system with damping, and (5) projectile motion under gravity and friction. To realistically emulate system behavior, we developed our own mathematical models based on the laws of mechanics. Users can interactively set the initial parameters for each simulation and observe how the system responds. By combining user input, real-time animations and visualization, and graphical feedback (diagrams that illustrate key aspects of dynamics for selected simulations), iMechLab aims to help future users gain a deeper understanding of mechanical phenomena through an immersive virtual experience.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1781529</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1781529</link>
        <title><![CDATA[Hyperpersonal relationships in social virtual reality]]></title>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Maura D. Corder</author><author>Brandon Kirk</author><author>Cameron W. Piercy</author>
        <description><![CDATA[Virtual reality (VR) is becoming an increasingly common venue for forming and maintaining relationships with others. Social VR, or platforms dedicated to social interaction in VR, offers opportunities for novel self-presentation and community building. Employing evidence from the hyperpersonal model, we use a survey with a convenience sample of 131 very active social VR users to test how their closest friendships in VR and in person vary, revealing closer relationships with partners in VR. Additional hypothesis testing shows that successful self-presentation and community belonging both positively relate to closeness to others in social VR and enjoyment. We interpret these findings within the constraints of a highly engaged user group and the hyperpersonal model’s emphasis on sustained online interaction. Implications are discussed for relationship building in social VR and the hyperpersonal model in rich mediated contexts, along with practical insights for the future of social VR.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1790470</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1790470</link>
        <title><![CDATA[Listening inside the sphere: user and practitioner perspectives on sound design in cinematic virtual reality (Cine-VR)]]></title>
        <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Hitesh Kumar Chaurasia</author><author>Manoj Majhi</author>
        <description><![CDATA[Over the years, filmmaking conventions and grammar have evolved around a fixed two-dimensional frame and its relationship with the audience in the cinema hall. In Cinematic Virtual Reality (Cine-VR), neither of these references exists. The two-dimensional frame is replaced by a 360-degree visual space, and the audience, as a “user,” can interact with and explore the spherical space independently. The freedom for users to look in any direction within the 360-degree space presents unique challenges for filmmakers. Hence, filmmaking norms and practices, including sound design, need to be re-examined in the context of 360-degree Cine-VR. In the last decade, scholars have studied various aspects of filmmaking, including sound design and its impact on user experience. However, there is a lack of studies that incorporate diverse users’ perceptions of sound design for Cine-VR compared to film. We conducted a convergent mixed-methods study using an online survey that combined quantitative questions (Likert-scale, categorical, and multiple-choice questions) and qualitative open-ended questions. Of the 256 participants from around the world, this study presents an analysis of a subset (N = 155). Based on their level of experience with film and Cine-VR, the participants were divided into three groups: Creators (n = 73), Professional Users (n = 48), and General Users (n = 34). The statistical analysis was carried out using Microsoft Excel and Python. For the content analysis of qualitative responses, we used ATLAS.ti along with Excel. The results show that in the absence of a 2D visual frame, there is a “functional shift” in the role of sound from “Convey Emotion” in films to “Guiding Viewers’ Attention” in Cine-VR. However, the effort put in by creators to guide viewers is not acknowledged by the audience and remains “Invisible Labor”. There is also a shift in challenges from production constraints (budget/time) to the technical nature of spatial audio and workflow. Despite participants’ overall enthusiasm, they expressed neutral views regarding the future and opportunities for sound design in Cine-VR. Overall, the findings suggest that Cine-VR is a distinct medium and needs its own framework of sound design for an immersive experience.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1783996</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1783996</link>
        <title><![CDATA[Making futures playable: role-based virtual reality career exploration to promote student imagination and creative self-efficacy]]></title>
        <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Hanieh Khaleghian</author><author>Robert LiKamWa</author><author>Ed Finn</author>
        <description><![CDATA[As artificial intelligence and automation reshape the landscape of work, conventional educational models are proving insufficient for preparing students to navigate their future careers. This creates a critical gap in the development of essential human competencies, such as imagination and creative self-efficacy, which are vital for adaptability in a rapidly changing economy. This paper proposes and offers an initial pilot evaluation of an evidence-informed design framework to support those outcomes. We present Career XRcade: Esports Land, a VR platform built on embodied role-play, narrative, and game-based learning. A mixed-methods pilot (N=35) assessed a sequential, within-subjects use pattern (lecture → VR). For the subset of students who completed both interventions (N=13), paired-samples t-tests on consistent single-item anchors suggested the VR session produced additional gains in creative self-efficacy (p=.014, dz=0.80) and career interest (p=.006, dz=0.92). The difference for imaginative capability was directionally positive but not statistically significant. Qualitative themes confirmed that students preferred the immersive format, describing it as more tangible and effective for visualizing future careers and building confidence. We contribute a promising design framework structured around principles of embodied enactment, game-based learning (GBL), and authentic world-building, and provide preliminary evidence of its potential to add meaningful gains beyond what a traditional lecture provides.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1767312</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1767312</link>
        <title><![CDATA[Immersive digitalization of cultural heritage: a validated virtual reality museum platform using integrated 3D and 360-degree technologies]]></title>
        <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Pongpipat Saithong</author><author>Jiranai Yoddee</author>
        <description><![CDATA[Introduction: Digital transformation is vital for preserving regional cultural assets, yet many projects lack a validated, systematic framework for implementation. This study addresses this gap by developing a framework for the Isan Local VR Museum. Methods: Employing a mixed-methods Research and Development (R&D) approach, the project utilized high-resolution 3D modeling and 360-degree panoramic imagery delivered via a responsive web platform. The system was validated through an expert quality review and a user satisfaction survey (n=157) using 5-point Likert scales. Results: Quantitative analysis showed high efficacy, with experts rating overall quality as “Very Good” (x¯ = 4.63) and users reporting the “Highest” level of satisfaction (x¯ = 4.51). Specifically, Simulated Map Navigation (x¯ = 5.00) and immersive views (x¯ = 4.62) were the highest-rated features. Discussion: While the results confirm the framework’s robustness for VR museum execution, findings suggest that Device Responsiveness (x¯ = 4.37) remains an area for further technical refinement. This research establishes a crucial benchmark for the immersive digitalization and integration of regional cultural resources.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1792043</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1792043</link>
        <title><![CDATA[Long-term effects of virtual reality exposure therapy for adolescents with public speaking anxiety: a one-year follow-up of a randomised controlled trial]]></title>
        <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Pia R. Hauge</author><author>Rolf Gjestad</author><author>Philip Lindner</author><author>Tine Nordgreen</author><author>Smiti Kahlon</author>
        <description><![CDATA[Introduction: Public Speaking Anxiety (PSA) is a highly prevalent disorder and may lead to adverse consequences. Virtual Reality Exposure Therapy (VRET) is an effective treatment for PSA in adults and adolescents, yet little is known about its long-term effects. Method: The current study assessed the clinical effects of a two-phase, four-arm RCT at 12-month follow-up. In the original trial, a total of 100 adolescents were randomly assigned to one of four conditions: 1) VRET only, 2) VRET + online exposure program, 3) online psychoeducation program + online exposure program, or 4) waitlist + online psychoeducation program. Participants were assessed on symptoms of public speaking anxiety, social interaction anxiety, and social phobia. Results: At the group level, the online psychoeducation + online exposure group showed a statistically significant reduction in PSAS scores from post-treatment to 12-month follow-up, whereas the other groups maintained their treatment gains. Social interaction anxiety symptoms remained unchanged following the intervention, while the improvement in social phobia symptoms was maintained at 12-month follow-up. Discussion: This study provides an early indication that both VRET and online psychoeducation + online exposure could be effective methods of reducing PSA in adolescents over a 12-month period. However, as the findings have several limitations, including the small sample size, they need to be explored further. The study was pre-registered at Clinicaltrials.gov (NCT04396392) on 11 February 2020.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1746499</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1746499</link>
        <title><![CDATA[Dual-layer emotion sensing in virtual reality: integrating facial action units and GPT-4o semantic mapping]]></title>
        <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Nur Syakirah Cik Zamani</author><author>Nurain Syafiqah Jannah Md Amin</author><author>Mahdiyeh Sadat Moosavi</author><author>Nur Zakiah Balqis Mohd Fauzi</author><author>Nusrat Z. Zenia</author><author>Suziah Sulaiman</author><author>Frédéric Mérienne</author>
        <description><![CDATA[Introduction: Objectively quantifying emotional responses in Virtual Reality (VR) remains a challenge for affective computing and adaptive interface design, particularly during active motor task execution. Methods: This study presents a dual-layer emotion sensing framework that combines rule-based Facial Action Unit (AU) analysis with GPT-4o-based semantic interpretation to characterize affective responses in immersive environments. As a pilot methodological validation, twenty-one healthy adults performed identical motor tasks in two VR settings (a Contentment Forest and a Horror Morgue) while their facial expressions were monitored. Results: Co-activations of 63 AUs were classified into six basic emotions; five emotions were retained for statistical analysis, revealing significant differences in emotional distributions between environments (χ2(4) = 69.92, p < 0.001). These emotion labels were subsequently reinterpreted into higher-level affective dimensions (Contentment and Anxiety). Discussion: The dual-layer framework enables continuous emotion profiling in VR without intrusive sensors, suggesting potential for future affect-aware VR applications in rehabilitation, training, and entertainment.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1735996</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1735996</link>
        <title><![CDATA[The role of embodiment in virtual Reality’s effects on hand osteoarthritis pain: a pilot study]]></title>
        <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>David H. Vo</author><author>Michaela Sawada</author><author>Robert R. Edwards</author><author>Nancy A. Baker</author>
        <description><![CDATA[The sense of embodiment (SoE), in which an individual takes bodily ownership of an artificial body part as if it were their own, has been shown to produce analgesic effects. SoE can be facilitated through virtual reality (VR) and spherical video-based virtual reality (SVVR), where users may experience embodiment of virtual limbs. In this study we used pre-recorded embodiment-inducing SVVR as a nonpharmacological analgesic intervention for individuals with hand osteoarthritis (HOA) who experienced chronic pain. The study used a pre/post within-subjects design (N = 10) to evaluate the immediate effects of SVVR and a 3-week single-subject design (N = 2) to evaluate potential prolonged effects. Results revealed that SVVR significantly reduced pain in the pre/post study during SVVR (p = 0.02; r = 0.82) and immediately after (p = 0.01; r = 0.90). The single-subject study demonstrated significant analgesic effects during the week of the intervention for one participant (p < 0.001), but not for the other (p = 0.89). Embodiment-inducing SVVR shows potential as a nonpharmacological and cost-effective way to manage pain and support functioning in people with HOA.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1751609</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1751609</link>
        <title><![CDATA[The impact of olfactory stimuli on foreign language vocabulary acquisition in an immersive virtual reality environment]]></title>
        <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Bruno Peixoto</author><author>Luciana Cabral P. Bessa</author><author>Guilherme Gonçalves</author><author>Maximino Bessa</author><author>Miguel Melo</author>
        <description><![CDATA[Introduction: Immersive virtual reality (iVR) offers a multisensory environment for education, yet the integration of olfaction remains underexplored. This study examined whether incorporating ambient olfactory stimuli into an iVR environment enhances foreign language vocabulary retention and the user’s sense of presence. Methods: A between-subjects experiment was conducted with 59 participants who learned German vocabulary in a virtual airport scenario. Participants were assigned to one of five ambient olfactory conditions systematically selected to represent distinct quadrants of the circumplex model of affect: no scent (control), spearmint (pleasant-arousing), lavender (pleasant-calming), burning wood (unpleasant-arousing), or sewage (unpleasant-calming). Vocabulary retention was measured using matching pre- and post-tests, while subjective presence was assessed using the standardised Igroup Presence Questionnaire (IPQp). Results: The results indicated that ambient olfactory stimulation, regardless of affective valence or arousal level, did not significantly improve immediate vocabulary retention compared to the control condition. However, scent did impact the subjective experience of presence; notably, an unpleasant, high-arousal scent (burning wood) served as a distraction, significantly reducing perceived spatial presence. Discussion: These findings establish an important boundary condition for multisensory educational VR. They demonstrate that the simple addition of ambient, affective scents as a background stimulus is insufficient to drive immediate cognitive learning gains, and may even detract from immersion if unpleasant. Multisensory iVR design must be guided by pedagogical priorities rather than novelty alone, suggesting that relying solely on ambient emotional modulation via olfaction is not a viable strategy for complex cognitive tasks.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1793021</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1793021</link>
        <title><![CDATA[Body ownership illusions in immersive virtual reality: implications for musculoskeletal rehabilitation]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Manca Opara Zupančič</author><author>Nejc Šarabon</author>
        <description><![CDATA[Body ownership illusions (BOIs) refer to the phenomenon in which individuals experience artificial or virtual body parts as their own. These illusions arise from the brain’s integration of multisensory input and can be reliably induced using immersive virtual reality (VR) technologies. While BOIs have been extensively studied in neurological and psychological contexts, emerging research suggests their potential applicability in musculoskeletal rehabilitation; however, the evidence base remains relatively limited. This review synthesizes findings from experimental and clinical studies on BOIs, with a focus on their relevance to pain modulation, kinesiophobia, altered body image, and their application in patient education and exercise. Key domains for the application of BOIs in musculoskeletal rehabilitation are proposed. Evidence suggests that BOIs may offer promising opportunities to modulate central mechanisms that often limit rehabilitation outcomes. By potentially updating maladaptive top-down expectations, enabling controlled exposure to feared movements, and leveraging the Proteus effect, BOIs could influence pain perception, kinesiophobia, and behaviour in ways that support recovery. Embodiment in healthy or hyper-capable avatars may contribute to improvements in body image and movement confidence, while the integration of BOIs into VR-based educational interventions may enhance emotional engagement, facilitate belief change, and increase motivation. Collectively, these proposed mechanisms suggest that BOIs have the potential to address psychosocial barriers and support adherence to rehabilitation programmes, although further clinical research is needed to confirm these effects in patient populations. Insights from this review may inform future research and guide the design of innovative, patient-centred VR rehabilitation strategies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1794720</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1794720</link>
        <title><![CDATA[Anthropomorphic AI: a toolkit for authoring and interacting with intelligent virtual agents for extended reality]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Ke Li</author><author>Fariba Mostajeran</author><author>Sebastian Rings</author><author>Julia Hertel</author><author>Susanne Schmidt</author><author>Michael Arz</author><author>Frank Steinicke</author>
        <description><![CDATA[Intelligent Virtual Agents (IVAs), which embody an artificial intelligence (AI) in a humanoid representation, have enormous potential for immersive extended reality (XR) environments to enable natural and engaging human-AI interactions. With recent advances in large language models (LLMs) and their ability to simulate human-like text responses, interest in anthropomorphic embodied IVAs has grown across XR research and application domains. However, toolkits for authoring and interacting with IVAs in research remain sparse. Therefore, we present Anthropomorphic AI, a flexible and scalable open-source research toolkit for authoring and interacting with embodied IVAs with rich multimodal capabilities, including speech, gaze, gestures, facial expressions, and vision. Our system enables developers to create various embodied anthropomorphic IVAs by customizing behavior through expressive nonverbal cues, selecting and combining different foundation models, speech-to-text (STT) and text-to-speech (TTS) methods, and adapting the system prompt to guide interaction. We also integrate features such as proximity detection, trajectory-based action recognition, and vision-based multimodal prompting to support natural human-IVA interaction in immersive XR. We evaluate the toolkit through four use case demonstrations, a pilot developer evaluation, and a pilot end-user evaluation in immersive VR, showing its capability in generating anthropomorphic IVAs for immersive XR applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1785069</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1785069</link>
        <title><![CDATA[SafeSpaceVR: development and evaluation of an open-source, clinician-guided virtual reality tool for social anxiety]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Kevin Hofbauer</author><author>Joshua Juvrud</author>
        <description><![CDATA[Background: Existing Virtual Reality Exposure Therapy (VRET) tools are predominantly solo-user platforms that lack real-time clinical interaction, creating structural and psychological barriers to the therapeutic alliance. Objective: The aim of this study was to assess the feasibility and patient-side usability of SafeSpaceVR, an open-source, asymmetric VRET architecture designed to bridge the gap between patient immersion and real-time clinician control. Methods: Utilizing a four-phase User-Centered Design (UCD) framework, we conducted formative research through a public survey (N = 33) and semi-structured clinician interviews (N = 3) to define technical requirements. These insights informed the development of a functional prototype, which was subsequently evaluated in a summative playtesting phase (N = 11) to assess usability, clarity, and perceived therapeutic relevance. Results: Formative data emphasized the critical need for emotional doorways, such as a virtual clinician's office, and granular clinician-side control over social density. Summative testing revealed high usability and strong perceived relevance for social anxiety treatment. Conclusions: This study demonstrates the initial feasibility of open-source, clinician-centered VR tools. As a proof of concept, SafeSpaceVR provides a foundational roadmap for remote, collaborative, and asymmetric digital mental health interventions.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1787939</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1787939</link>
        <title><![CDATA[A systematic scoping review of virtual environments and tasks usable for therapist or self-guided assessment and intervention for Social Anxiety in children, adolescents, and adults]]></title>
        <pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Teresa Schmidt-Peter</author><author>Andreas Mühlberger</author><author>Theresa F. Wechsler</author>
        <description><![CDATA[Background: Virtual reality (VR) is a promising tool for diagnostics and treatment in Social Anxiety Disorder (SAD) and its subtypes. In VR, controlled environments for the activation of social fears and anxiety can be provided, making them valuable for measuring and modifying SAD. While research on both is expanding, the range of VR environments currently in use remains unclear. Therefore, this work systematically examines virtual environments and tasks for self- or therapist-guided assessment and intervention of Social Anxiety in children, adolescents, and adults, focusing on technical implementation, feasibility, usability, and effectiveness in anxiety activation. Methods: We conducted a preregistered systematic scoping review of original research published until January 2025 in English or German targeting interactive, immersive VR systems for assessment and/or intervention in Social Anxiety. Included studies cover samples of children, adolescents, or adults with a DSM/ICD diagnosis of SAD, or high Social Anxiety based on validated instruments. We extracted and summarized qualitative data on study and sample characteristics, VR environment design, technical realization, feasibility, usability, and presence. Furthermore, we descriptively summarized quantitative data on presence and anxiety activation during VR sessions. Results: A total of 31 studies were included, eleven of which provided quantitative data for summary. Public speaking tasks dominated VR scenarios, while other subtypes, such as fear of blushing, were not addressed. VR scenarios for children and adolescents remained a substantial research gap. Immersive systems typically involved HMDs with head-tracking and, in part, motion- or eye-tracking. Effect sizes for anxiety activation were largely lacking; among the few studies reporting them, moderate to high effects were observed. Presence was sufficient to elicit anxiety responses, although social presence was underreported. Feasibility was assessed via dropout and adherence rates; usability reporting focused primarily on cybersickness, supplemented by user-friendliness, time- and cost-efficiency, and user satisfaction. Conclusion: This review presents the wide range of VR environments and tasks for assessment and intervention in Social Anxiety, with promising anxiety-activating effects. However, improvements in technical features, usability, and presence are needed. Future research should focus on expanding VR scenarios to address individual fears, enhancing dialogues and social interactions, and providing applications targeting children and adolescents.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1780297</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1780297</link>
        <title><![CDATA[Using VR in conjunction with biometric sensors to assess public perception of urban biodiversity]]></title>
        <pubdate>2026-04-14T00:00:00Z</pubdate>
        <category>Brief Research Report</category>
        <author>Daniel Phillips</author><author>Garrett Farrow</author>
        <description><![CDATA[Immersive virtual reality is increasingly used to evaluate spatial and environmental design scenarios, yet most studies rely on post-experience surveys that summarize experience retrospectively and obscure how responses unfold during exposure. This study demonstrates how integrating wearable biometric sensing with immersive virtual reality can extend spatial research by capturing temporally resolved, embodied responses to environmental variation. We conducted a controlled virtual reality experiment featuring three urban spatial typologies (residential, institutional, and urban core), each containing four sequential biodiversity treatments ranging from none to high. Heart rate and galvanic skin response were recorded continuously and analyzed as within-subject deviations from baseline, then segmented according to spatial typology and treatment. Aggregate survey responses were used to characterize reflective post-experience perceptions of biodiversity, naturalness, inviting quality, and overall liking. Results show that physiological arousal does not scale linearly with either perceived biodiversity or stated preference: moderate biodiversity conditions often elicited elevated heart rate alongside peak liking, while high biodiversity conditions sometimes produced heightened arousal accompanied by reduced liking. An illustrative within-subject comparison demonstrates how biometric data reveal experiential inflection points that are not evident from survey responses alone. Rather than treating biometric measures as validation tools, this work positions immersive virtual reality combined with biometric sensing as an embodied response instigator that complements reflective evaluation and offers new methodological possibilities for spatial design research.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1757871</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1757871</link>
        <title><![CDATA[Two limbs are better than one: multi-limb tactons for precise hand navigation]]></title>
        <pubdate>2026-04-10T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Takashige Suzuki</author><author>Kizashi Nakano</author><author>Takuji Narumi</author><author>Hideaki Kuzuoka</author>
        <description><![CDATA[Virtual Reality (VR) training systems often rely on visual cues, which can compete for the user’s attention, particularly in high-skill domains like surgery or assembly. While vibrotactile guidance offers a non-visual alternative, current single-limb systems suffer from limited spatial resolution, restricting their directional precision for complex tasks. To overcome this limitation, we propose a multi-limb “Tacton (symbolic vibrotactile patterns)” strategy that distributes vibrotactile information across anatomically distinct limbs (the wrist and ankle). We conducted two experiments to validate this approach and determine the optimal reference frame for inter-limb coordination. Experiment 1 (N=12) evaluated the distribution strategy by comparing the directional precision of single-limb (wrist and forearm) versus multi-limb (wrist and ankle) configurations using a novel temporal pattern encoding for 32 unique directions. Results demonstrated that distributing cues significantly improved precision, reducing angular errors to under 22.5° in over 77% of trials compared to the single-limb condition. Experiment 2 (N=12) addressed the cognitive challenge of coordinating these distributed signals by comparing a body-based “Skeletal” frame with an environment-based “World” reference frame. The “World” frame, which maps cues to an allocentric coordinate system, yielded substantially faster reaction times and lower angular errors than the “Skeletal” frame, minimizing the cognitive load associated with mental rotation. We conclude that high-precision, non-visual hand guidance is best achieved by distributing symbolic haptic cues across separate limbs and mapping them to a stable, allocentric coordinate system. These findings provide foundational design principles for creating immersive, hands-free guidance systems that preserve the user’s visual-attentional resources.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1797341</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1797341</link>
        <title><![CDATA[Applications of extended reality within the shipbuilding industry: a systematic literature review]]></title>
        <pubdate>2026-04-10T00:00:00Z</pubdate>
        <category>Systematic Review</category>
        <author>Joni Rajamäki</author><author>Mirva Tapola</author><author>Olli Heimo</author><author>Teijo Lehtonen</author><author>Jaakko Järvi</author>
        <description><![CDATA[Extended reality encompasses virtual reality, which places the user in a virtual world, and augmented reality, which adds virtual elements to the real world. Extended reality has been touted as a pivotal technology of Industry 4.0, but has yet to make a significant impact in industrial applications. Shipbuilding is a longstanding and traditional branch of industry that is characterized as slow to innovate. Its importance is rising as regions like the Arctic are unlocked, placing additional demand on shipyards. To accommodate this increased demand, novel means of improving efficiency are welcome in shipbuilding. This article presents a systematic literature review of research on the use of extended reality within the shipbuilding industry. The review focuses on the current extent of research, how different sub-technologies of extended reality map onto different phases of shipbuilding, how the technology is evaluated, and what kind of value can be derived from current research. A total of 44 articles from nine sources are reviewed. The results indicate an overall early state of research characterized by a heavy focus on pilot studies. Clear use cases for extended reality solutions are identified, and some instances of demonstrable value for shipbuilding operations are presented. Shortcomings in the current research and potential future directions are also outlined.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1806316</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1806316</link>
        <title><![CDATA[Saccadic undershooting in gaze generation for virtual characters]]></title>
        <pubdate>2026-04-10T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Ruoting Lian</author><author>Hironori Mitake</author><author>Shoichi Hasegawa</author>
        <description><![CDATA[Saccades are the primary eye movements used to shift gaze and play an important role in generating realistic gaze behavior for virtual characters. While many gaze generation studies focus on high-level attention or target selection, finer characteristics of human eye movements–such as saccadic undershooting–have received relatively little attention. In this paper, we propose a psychologically plausible gaze generation model that explicitly incorporates saccadic undershooting to produce more human-like gaze behavior. The model parameters were derived from eye-tracking data collected in a VR environment and integrated into a gaze generation framework for virtual characters. Quantitative evaluation using leave-one-participant-out cross-validation shows that the proposed model reproduces the undershooting patterns observed in human data and achieves lower absolute errors compared with a representative existing model and a random baseline. A subjective user study further indicates that participants can perceive differences between gaze behaviors generated by the models. Although no significant differences were found in median ratings of human-likeness, roboticness, or head–eye coordination, the bimodal rating patterns suggest that subtle variations in gaze behavior may influence users’ perceptions of virtual characters.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1730408</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1730408</link>
        <title><![CDATA[Generational differences in visual engagement: applying the visual interaction analysis (VIA) methodology composed of eye-tracking and virtual reality]]></title>
        <pubdate>2026-04-10T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Cristobal Rodolfo Guerra-Tamez</author><author>Pedro Daniel Rodríguez Sánchez</author>
        <description><![CDATA[Understanding visual engagement in marketing is crucial for optimizing user experience and enhancing campaign effectiveness. This study applies the Visual Interaction Analysis (VIA) methodology as a controlled VR-based protocol combining immersive virtual reality with eye-tracking to quantify visual attention allocation and examine generational and gender differences in engagement with advertising materials. Using the Cognitive3D platform, fixation-based metrics—Total Fixations (TF), Total Duration of Fixations (TDF), and Average Duration per Fixation (ADF)—were analyzed for Millennials and Generation Z participants. A total of 82 participants were recruited; inferential analyses were conducted on the valid eye-tracking subsample (N = 44; 22 Millennials, 22 Gen Z). Results revealed cohort differences in visual attention allocation for this poster stimulus under controlled VR free-viewing conditions: Generation Z exhibited higher fixation counts and longer total fixation durations than Millennials. In contrast, average fixation duration did not differ between generations, suggesting comparable moment-to-moment attentional processing. Within Generation Z, female participants showed longer average fixation durations than males. These findings demonstrate how fixation-based eye-tracking in controlled VR environments can provide actionable diagnostics of visual attention distribution, supporting pre-launch optimization of marketing and communication materials.]]></description>
      </item>
      </channel>
    </rss>