<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Virtual Reality | Technologies for VR section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/virtual-reality/sections/technologies-for-vr</link>
        <description>RSS Feed for Technologies for VR section in the Frontiers in Virtual Reality journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>13 May 2026 18:05:24 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1769463</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1769463</link>
        <title><![CDATA[XR application development in the 21st century: a survey spanning two decades of XR developers, applications, and challenges]]></title>
        <pubDate>30 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Tuukka M. Takala</author><author>Ryan P. McMahan</author><author>Nayan Chawla</author><author>Ernst Kruijff</author><author>Takashi Kawai</author>
        <description><![CDATA[This article provides an in-depth analysis of how virtual and augmented reality (XR) application development has evolved since the emergence of affordable XR hardware in the mid-2010s. Based on surveying 158 XR developers about their experiences between 2003 and 2020, and additional interviews conducted in 2024, our study reveals a significant reduction in barriers to entry for creating XR applications. Although many of the technical challenges faced by developers have eased over time, testing-related difficulties remain a major hurdle in XR application development and may even have become more pronounced. Moreover, despite the availability of XR toolkits, developers still tend to build common features like graphical user interfaces and object manipulation from scratch rather than reusing existing components. In addition to documenting these trends in the post-2015 XR landscape, the article proposes strategies to address ongoing challenges, presents a ranked developer wishlist of XR toolkit features, and suggests ways to further support and empower XR developers.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1746725</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1746725</link>
        <title><![CDATA[Development of the interactive virtual laboratory for fundamental mechanics – iMechLab]]></title>
        <pubDate>29 Apr 2026 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <author>Marija Šljivak</author><author>Saša Lazović</author><author>Vladimir M. Petrović</author>
        <description><![CDATA[The extensive development of AI- and virtual reality-based technologies in recent years has opened new perspectives on educational tools, making them more immersive and accessible. Bearing in mind that students spend a significant amount of time each day engaging in online and virtual activities (e.g., social networks, computer games, etc.), it is critically important to leverage this and get them interested in science through these kinds of interactive media. This especially applies to fields of science that are traditionally considered demanding to engage in. This paper presents current progress on the development of iMechLab, an interactive virtual laboratory for fundamental mechanics. Our virtual laboratory contains five simulation modules aimed at covering different aspects of fundamental problems in mechanics: (1) motion of a rectangular block on a horizontal surface under an external force and friction, (2) motion of a rectangular block down an inclined plane under an external force and friction, (3) a pendulum with adjustable length, mass, and initial angle, (4) a vertically oscillating spring-mass system with damping, and (5) projectile motion under gravity and friction. To realistically emulate system behavior, we developed our own mathematical models based on the laws of mechanics. Users can interactively set the initial parameters for each simulation and observe how the system responds. By combining user input, real-time animations and visualization, and graphical feedback (diagrams that illustrate key aspects of dynamics for selected simulations), iMechLab aims to help future users gain a deeper understanding of mechanical phenomena through an immersive virtual experience.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1767312</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1767312</link>
        <title><![CDATA[Immersive digitalization of cultural heritage: a validated virtual reality museum platform using integrated 3D and 360-degree technologies]]></title>
        <pubDate>23 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Pongpipat Saithong</author><author>Jiranai Yoddee</author>
        <description><![CDATA[Introduction: Digital transformation is vital for preserving regional cultural assets, yet many projects lack a validated, systematic framework for implementation. This study addresses this gap by developing a framework for the Isan Local VR Museum. Methods: Employing a mixed-methods Research and Development (R&D) approach, the project utilized high-resolution 3D modeling and 360-degree panoramic imagery delivered via a responsive web platform. The system was validated through an expert quality review and a user satisfaction survey (n=157) using 5-point Likert scales. Results: Quantitative analysis showed high efficacy, with experts rating overall quality as “Very Good” (x̄ = 4.63) and users reporting the “Highest” level of satisfaction (x̄ = 4.51). Specifically, Simulated Map Navigation (x̄ = 5.00) and immersive views (x̄ = 4.62) were the highest-rated features. Discussion: While the results confirm the framework’s robustness for VR museum execution, findings suggest that Device Responsiveness (x̄ = 4.37) remains an area for further technical refinement. This research establishes a crucial benchmark for the immersive digitalization and integration of regional cultural resources.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1794720</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1794720</link>
        <title><![CDATA[Anthropomorphic AI: a toolkit for authoring and interacting with intelligent virtual agents for extended reality]]></title>
        <pubDate>20 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ke Li</author><author>Fariba Mostajeran</author><author>Sebastian Rings</author><author>Julia Hertel</author><author>Susanne Schmidt</author><author>Michael Arz</author><author>Frank Steinicke</author>
        <description><![CDATA[Intelligent Virtual Agents (IVAs), which embody an artificial intelligence (AI) in a humanoid representation, have enormous potential for immersive extended reality (XR) environments to enable natural and engaging human-AI interactions. With recent advances in large language models (LLMs) in simulating human-like text responses, interest in anthropomorphic embodied IVAs has grown across XR research and application domains. However, toolkits for authoring and interacting with IVAs in research remain sparse. Therefore, we present Anthropomorphic AI, a flexible and scalable open-source research toolkit for authoring and interacting with embodied IVAs with rich multimodal capabilities, including speech, gaze, gestures, facial expressions, and vision. Our system enables developers to create various embodied anthropomorphic IVAs by customizing behavior through expressive nonverbal cues, selecting and combining different foundation models, speech-to-text (STT) and text-to-speech (TTS) methods, and adapting the system prompt to guide interaction. We also integrate various features such as proximity detection, trajectory-based action recognition, and vision-based multimodal prompting for supporting natural human-IVA interaction in immersive XR. We evaluate the toolkit through four use case demonstrations, a pilot developer evaluation, and a pilot end-user evaluation in immersive VR, showing its capability in generating anthropomorphic IVAs for immersive XR applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1819537</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1819537</link>
        <title><![CDATA[Editorial: New frontiers in immersive technologies: expanding the scope of telepresence, monitoring, and intervention]]></title>
        <pubDate>16 Mar 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Salvatore Livatino</author><author>Adam Wojciechowski</author><author>Hai-Ning Liang</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1757251</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1757251</link>
        <title><![CDATA[Eye tracking in virtual reality for neurorehabilitation: a narrative perspective on needs, challenges, and pathways beyond game engines]]></title>
        <pubDate>11 Mar 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Minxin Cheng</author><author>Leanne Chukoskie</author>
        <description><![CDATA[Virtual reality (VR) systems with integrated eye tracking offer a powerful way to study and support sensorimotor and cognitive function in neurorehabilitation. Eye movements provide a high-bandwidth window onto information processing, visuomotor integration, cognitive load, and affect, while immersive VR enables more ecologically valid yet controllable tasks spanning visual exploration, movement execution, object interaction, and social exchange. This narrative review synthesizes recent work on eye tracking in VR for neurorehabilitation, focusing on three application domains: assessment, intervention, and supportive design, together with the technical and governance requirements needed to make these systems clinically meaningful and ethically responsible. We highlight how the dominant implementation pattern of integrated headsets streaming preprocessed gaze rays into game engines introduces black-box processing, frame-bound timing, and limited calibration control that pose threats to validity, reproducibility, and cross-site comparability. We review emerging workarounds, including modular architectures that decouple sensing and rendering, explicit latency benchmarking and cross-modal synchronization, adaptive and implicit calibration approaches, and privacy-by-design frameworks from digital phenotyping and metaverse healthcare. Taken together, the evidence suggests that eye-tracked VR is already capable of supporting informative assessments and promising interventions, but that realizing its full potential for neurorehabilitation will require a shift toward architectures that support transparent control over sampling, calibration, timing, and data governance, as well as handling eye tracking data as both a sensitive clinical signal and a protected form of personal data.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1764455</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1764455</link>
        <title><![CDATA[Virtual climbing: climb in place with four limbs]]></title>
        <pubDate>23 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Takashi Mitsuda</author><author>Shoichi Kimura</author>
        <description><![CDATA[This paper presents the development of a virtual climbing application using commercially available virtual reality (VR) equipment. The application enables users to simulate climbing a virtual wall with their hands and feet while remaining physically grounded. In contrast to real climbing, where climbers support their bodies by gripping and stepping on holds, the virtual environment requires users to maintain postural balance with one foot suspended in midair. This discrepancy highlights a fundamental difference in the motion dynamics between real and virtual experiences. This study analyzed this inconsistency and proposed methods to enhance the sensation of natural climbing in VR. The experimental results revealed that the application provided an accessible and enjoyable experience, allowing users to perform climbing-like movements using both hands and feet. However, their sense of climbing realism and feelings of strangeness varied individually. The mismatch in motion coordination did not significantly impair realism and enjoyment; however, the absence of tactile feedback—specifically, the sensation of force through the hands and feet—resulted in perceptual gaps for users. This paper also describes climbing movements that cannot be performed during virtual climbing in place and discusses potential solutions. These findings offer valuable insights into improving realism and enjoyment in VR-based climbing simulations.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1760619</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1760619</link>
        <title><![CDATA[Running in triangles: the effects of continuous and discrete locomotion techniques on spatial orientation in virtual reality - a comparative study]]></title>
        <pubDate>03 Feb 2026 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <author>Jennifer Brade</author><author>Philipp Stiens</author><author>Annegret Melzer</author><author>Samuel Korb</author><author>Franziska Klimant</author><author>Martin Dix</author>
        <description><![CDATA[As we navigate in real-world environments, our egocentric location representations are seamlessly and automatically refreshed. However, when traversing a virtual space using magical locomotion techniques, it is common to experience disorientation and discomfort due to insufficient sensory input, particularly related to bodily movement. To avert disorientation and discomfort (cybersickness) in virtual reality without limiting overall usage by employing more natural locomotion techniques (redirected walking, treadmill, etc.), alternative approaches must be explored. In the presented experiment, participants engaged in a spatial updating task within a sparse virtual scene and were instructed to return to an initial position following simulated movements. They performed this task using the teleportation method, a purely continuous locomotion approach without self-motion (dash), and a combination of both techniques. All three methods were evaluated over short (3 m) and long (13 m) distances, and cybersickness along with cognitive load were assessed for every condition. Overall, the findings indicated no notable differences in cybersickness, cognitive load, and spatial localization across the conditions, although cognitive load was reduced and spatial localization was improved at shorter distances. For the selected scenario, the results suggest that the extent of continuous locomotion offers only a minimal advantage in spatial orientation and virtually no downside concerning cybersickness.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1718280</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1718280</link>
        <title><![CDATA[Taxonomy of human-system interaction challenges for metaverse integration in industrial maintenance]]></title>
        <pubDate>29 Jan 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Parul Khanna</author><author>Ramin Karim</author><author>Phillip Tretten</author>
        <description><![CDATA[The metaverse is an emerging technological shift that enhances collaboration, telepresence, and decision-making, and can revolutionise industrial maintenance practices. While immersive technologies, such as Extended Reality (XR) encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), are widely applied in domains like gaming, healthcare, and education, their adoption in industrial workflows remains limited. The development and implementation of the metaverse carry challenges, especially from a Human-System Interaction (HSI) perspective. The purpose of this research is to understand the key technological issues and challenges associated with the implementation and use of the metaverse in industrial maintenance from an HSI perspective. This study employs a structured, systematic literature review focusing on the metaverse, as enabled by immersive technologies, in the context of industrial maintenance. The reviewed literature was analysed using thematic qualitative analysis to identify recurring HSI-related challenges and to develop a taxonomy categorising these challenges. The analysis resulted in a taxonomy comprising seven key challenge categories: usability, data management, accessibility, user experience (UX), technological performance, environmental and contextual awareness, and trust and transparency. The findings highlight UX as the core factor influencing adoption, as most challenges directly or indirectly impact user experience. The findings indicate that addressing these challenges can enable intuitive, transparent, and reliable metaverse systems tailored to industrial needs. However, advancing the industrial metaverse will require an interdisciplinary approach that combines engineering, human factors, data science, and design to deliver systems that are both technologically advanced and human-centred.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1761291</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1761291</link>
        <title><![CDATA[Correction: Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality]]></title>
        <pubDate>22 Dec 2025 00:00:00 GMT</pubDate>
        <category>Correction</category>
        <author>Peter Kullmann</author><author>Theresa Schell</author><author>Mario Botsch</author><author>Marc Erich Latoschik</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1628684</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1628684</link>
        <title><![CDATA[Exploratory physical education teachers’ perspectives and intentions to use VR in the classroom context: a cross-sectional qualitative study]]></title>
        <pubDate>24 Nov 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>José Pedro Amoroso</author><author>Olia Tsivitanidou</author><author>Marc Sarens</author><author>Efstathios Christodoulides</author><author>Kyriaki Antoniou</author><author>David Silva</author><author>Luís Coelho</author><author>Wouter Cools</author>
        <description><![CDATA[Introduction: This study explored the potential role of emerging technologies, particularly active Virtual Reality (VR), from a Physical Education (PE) teacher’s perspective. VR technologies, which provide three-dimensional (immersive) simulation environments, have become more accessible and cost-effective in recent years. Using this technology to train students in various PE areas may add value. Objectives: The study aimed to understand PE teachers’ knowledge of VR and their expectations for teaching PE using VR in classroom settings. Specifically, we explored the experiences, challenges, and potential benefits perceived by PE teachers across four European countries. Participants: Thirty-eight PE teachers from Portugal, Belgium, Italy, and Cyprus participated voluntarily. Design: This qualitative study employed a phenomenological approach. Data were collected between March and May 2024 in public and private secondary schools with ethical approval. Methods: Data were gathered through open-ended focus group questions and analysed using a thematic analysis approach. Results: Responses revealed varied experience levels with VR. Most participants expressed a willingness to use VR in PE, showing enthusiasm for new technologies and cautious optimism about integration. While recognizing its potential, respondents highlighted limitations. Technical barriers included Internet issues, limited technical skills, and lack of IT support. These reflect the challenges of implementing VR in schools. Teachers valued VR’s potential to expose students to otherwise inaccessible sports and activities. They also discussed its use for improving specific skills, such as first aid, game tactics, and individual sports techniques. Conclusion: Integrating VR into PE presents both challenges and opportunities. Addressing training, financial, and logistical issues may enhance student engagement and learning outcomes.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1570383</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1570383</link>
        <title><![CDATA[Desktop versus VR for collaborative sensemaking]]></title>
        <pubDate>11 Nov 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ying Yang</author><author>Tim Dwyer</author><author>Zachari Swiecki</author><author>Benjamin Lee</author><author>Michael Wybrow</author><author>Maxime Cordeil</author><author>Teresa Wulandari</author><author>Bruce H. Thomas</author><author>Mark Billinghurst</author>
        <description><![CDATA[Immersive environments enable people to share a workspace in a more spatial and embodied manner than traditional desktop collaboration platforms. However, it remains unclear whether such differences support collaborators in sharing information to build mutual understanding during sensemaking. To investigate this, we conducted a user study with groups of four participants—each given exclusive starting information—using mind maps as a medium for information sharing and collaborative sensemaking. Participants used both the VR and desktop systems we developed to complete sensemaking tasks. Our results reveal that the primary focuses of mind-mapping activities differed between VR and desktop: participants in VR engaged more in problem solving, whereas on desktop they concentrated more on mind map organisation. We synthesise our results from post hoc analysis, observations and subjective feedback, and attribute the discrepancies to the fundamental distinctions between the affordances of traditional desktop tools and embodied presence and interactions in VR. We therefore suggest additional features that facilitate mind map authoring and organisation such as automatic mechanisms be considered essential in future immersive mind-mapping systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1622605</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1622605</link>
        <title><![CDATA[Panoramic imaging in immersive extended reality: a scoping review of technologies, applications, perceptual studies, and user experience challenges]]></title>
        <pubDate>10 Sep 2025 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Muhammad Tukur</author><author>Sara Jashari</author><author>Mahmood Alzubaidi</author><author>Babatunde Abiodun Salami</author><author>Yehia Boraey</author><author>Sindy Yong</author><author>Dina Saleh</author><author>Giovanni Pintore</author><author>Enrico Gobbetti</author><author>Jens Schneider</author><author>Noora Fetais</author><author>Marco Agus</author>
        <description><![CDATA[Panoramic imaging plays a pivotal role in creating immersive experiences within Extended Reality (XR) environments, including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). This paper presents a scoping review of the research on panoramic-based XR technologies, focusing on both static and dynamic 360° imaging techniques. The study analyzes 39 primary studies published between 2020 and 2024, offering insights into the technological frameworks, applications, and limitations of these XR systems. The findings reveal that education, tourism, entertainment, and gaming are the most dominant sectors leveraging panoramic-based XR, accounting for 28.21%, 25.64%, 23.08%, and 20.51% of the reviewed studies, respectively. In contrast, challenges such as high computational demands, low image quality and depth perception, and bandwidth and latency issues are among the critical limitations identified in 28.21%, 23.08%, and 15.38% of the studies, respectively. The analysis also explores the level of user interaction and immersion supported by these systems, specifically in terms of degrees of freedom (DoF). A majority of the studies (56.41%) offer 3DoF, which allows users to look around within a static position, while only 35.90% provide 6DoF, enabling full movement in space. This indicates that most panoramic XR applications currently support limited interaction, though 6DoF systems are being adopted in a notable portion of the reviewed work to enable more immersive experiences. The review further examines key perceptual studies related to user experiences, including visual perception, presence and immersion, cognitive load and attention distribution, and spatial awareness in panoramic XR environments. In addition, user experience challenges such as discrepancies in spatial and movement perception, along with cybersickness, are among the most commonly reported issues. 
The paper concludes by outlining future research directions aimed at addressing these challenges, optimizing system performance, reducing user discomfort, and expanding the applicability of panoramic-based XR technologies in fields such as healthcare, industrial training, and remote collaboration.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1594350</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1594350</link>
        <title><![CDATA[Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality]]></title>
        <pubDate>11 Aug 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Peter Kullmann</author><author>Theresa Schell</author><author>Mario Botsch</author><author>Marc Erich Latoschik</author>
        <description><![CDATA[In co-located extended reality (XR) experiences, headsets occlude their wearers’ facial expressions, impeding natural conversation. We introduce two techniques to mitigate this using off-the-shelf hardware: compositing a view of a personalized avatar behind the visor (“see-through visor”) and reducing the headset’s visibility and showing the avatar’s head (“head substitution”). We evaluated them in a repeated-measures dyadic study (N = 25) that indicated promising effects. Collaboration with a confederate with our techniques, compared to a no-avatar baseline, resulted in quicker consensus in a judgment task and enhanced perceived mutual understanding. However, the avatar was also rated and commented on as uncanny, though participant comments indicate tolerance for avatar uncanniness since they restore gaze utility. Furthermore, performance in an executive task deteriorated in the presence of our techniques, indicating that our implementation drew participants’ attention to their partner’s avatar and away from the task. We suggest giving users agency over how these techniques are applied and recommend using the same representation across interaction partners to avoid power imbalances.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1629908</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1629908</link>
        <title><![CDATA[“Did you hear that?”: Software-based spatial audio enhancements increase self-reported and physiological indices on auditory presence and affect in virtual reality]]></title>
        <pubDate>31 Jul 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ifigeneia Mavridou</author><author>Ellen Seiss</author><author>Giuseppe Ugazio</author><author>Mark Harpster</author><author>Phillip Brown</author><author>Sophia Cox</author><author>Filip Panchevski</author><author>Christine Erie</author><author>David Lopez</author><author>Ryan Copt</author><author>Charles Nduka</author><author>James Hughes</author><author>Joseph Butera</author><author>Daniel N. Weiss</author>
        <description><![CDATA[Introduction: This study investigates the impact of a software-based audio enhancement tool, Q6, in virtual reality (VR), examining the relationship between spatial audio, immersion, and affective responses using self-reports and physiological measures. Methods: Sixty-eight participants experienced two VR scenarios, i.e., a commercial game (Job Simulator) and a non-commercial simulation (Escape VR), under both enhanced and normal audio conditions. We propose a dual-method assessment approach, combining self-reports with moment-by-moment physiological data analysis, emphasizing the value of continuous physiological tracking for detecting subtle changes in electrophysiology in simulated VR experiences. Results: Enhanced ‘localised’ audio significantly improved perceived sound quality, immersion, sound localization, and emotional involvement. Notably, commercial VR content exhibited a stronger response to audio enhancements than the non-commercial simulation, likely due to sound architecture: the commercial content featured meticulously crafted sound design, while the non-commercial simulation had numerous sounds that were less spatially structured, resulting in a less coherent auditory experience. Enhanced audio additionally intensified both positive and negative affective experiences during key audiovisual events. Discussion: Our findings support software-based audio enhancement as a cost-effective method to optimize auditory involvement in VR without additional hardware. This research provides valuable insights for designers and researchers aiming to improve audiovisual experiences and highlights future directions for exploring adaptive audio technologies in immersive environments.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1598776</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1598776</link>
        <title><![CDATA[Enhancing eyes-free interaction in virtual reality using sonification for multiple object selection]]></title>
        <pubDate>14 Jul 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Yota Takahara</author><author>Arinobu Niijima</author><author>Chanho Park</author><author>Takefumi Ogawa</author>
        <description><![CDATA[In virtual reality (VR) environments, selecting and manipulating multiple out-of-view objects is often challenging because most current VR systems lack integrated haptics. To address this limitation, we propose a sonification method that guides users’ hands to target objects outside their field of view by assigning distinct auditory parameters (pan, frequency, and amplitude) to the three spatial axes. These parameters are discretized into three exponential steps within a comfortable volume (less than 43 dB) and frequency range (150–700 Hz), determined via pilot studies to avoid listener fatigue. Our method dynamically shifts the sound source location depending on the density of the target objects: when objects are sparsely positioned, each object serves as its own sound source, whereas for dense clusters, a single sound source is placed at the cluster’s center to prevent overlapping sounds. We validated our technique through user studies involving two VR applications: a shooting game that requires rapid weapon selection and a 3D cube keyboard for text entry. Compared to a no-sound baseline, our sonification significantly improved positional accuracy in eyes-free selection tasks. In the shooting game, participants could more easily swap weapons without losing sight of on-screen action, while in the keyboard task, typing accuracy more than doubled during blind entry. These findings suggest that sonification can substantially enhance eyes-free interaction in VR without relying on haptic or visual cues, thereby offering a promising avenue for more efficient and comfortable VR experiences.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1587768</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1587768</link>
        <title><![CDATA[Developing and evaluating the fidelity of virtual reality-artificial intelligence (VR-AI) environment for situated learning]]></title>
        <pubdate>2025-07-03T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>April Tan</author><author>Michael C. Dorneich</author><author>Elena Cotos</author>
        <description><![CDATA[Introduction: Socialization is crucial for facilitating disciplinary enculturation, yet traditional classroom instruction often lacks authentic socialization opportunities, limiting students’ exposure to their disciplinary communities. To address this gap, this study develops an immersive Virtual Reality-Artificial Intelligence (VR-AI) environment that simulates academic conference poster sessions. Learners interact with AI-driven agents, engaging in discussions and receiving real-time feedback on research communication. This study focuses on developing, operationalizing, and evaluating the fidelity of the VR-AI environment across four key dimensions: physical, functional, psychological, and social fidelity. Methods: Twenty participants tested the environment, completing two learning tasks: engaging with poster presenters and reflecting with a major professor. Fidelity was assessed using mixed methods, including presence questionnaires, workload assessments, behavioral observations, and semi-structured interviews. Results: Findings indicate high physical and functional fidelity, with participants describing the environment as immersive and reflective of real-world academic settings. Psychological fidelity was also well represented, as learners engaged in cognitively demanding research discussions and rhetorical reflection. However, social fidelity remained a challenge, as AI agents struggled with conversational turn-taking and response length, reducing the authenticity of academic exchanges. Discussion: These findings highlight the potential of VR-AI environments for disciplinary socialization while underscoring the need for refined AI-driven interaction designs to support more fluid, reciprocal dialogue.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1555173</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1555173</link>
        <title><![CDATA[Milo: an LLM-based virtual human open-source platform for extended reality]]></title>
        <pubdate>2025-06-05T00:00:00Z</pubdate>
        <category>Technology and Code</category>
        <author>Alon Shoa</author><author>Doron Friedman</author>
        <description><![CDATA[Large language models (LLMs) have made dramatic advancements in recent years, enabling a new generation of dialogue agents and, with them, new types of social experiences with virtual humans in both virtual and augmented reality. In this paper, we introduce an open-source system specifically designed for implementing LLM-based virtual humans within extended reality (XR) environments. Our system integrates into XR platforms, providing a robust framework for the creation and management of interactive virtual agents. We detail the design and architecture of the system and showcase its versatility through various scenarios. In addition to a straightforward single-agent setup, we demonstrate how an LLM-based virtual human can attend a multi-user virtual reality (VR) meeting, enhance a VR self-talk session, and take part in an augmented reality (AR) live event. We share lessons learned, with a focus on the possibilities for human intervention during live events. We provide the system as open source, inviting collaboration and innovation within the community and paving the way for new types of social experiences.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1583474</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1583474</link>
        <title><![CDATA[Avatars for the masses: smartphone-based reconstruction of humans for virtual reality]]></title>
        <pubdate>2025-05-21T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Timo Menzel</author><author>Erik Wolf</author><author>Stephan Wenninger</author><author>Niklas Spinczyk</author><author>Lena Holderrieth</author><author>Carolin Wienrich</author><author>Ulrich Schwanecke</author><author>Marc Erich Latoschik</author><author>Mario Botsch</author>
        <description><![CDATA[Realistic full-body avatars play a key role in representing users in virtual environments, where they have been shown to considerably improve important effects of immersive experiences such as body ownership and presence. Consequently, the demand for realistic virtual humans – and methods for creating them – is rapidly growing. However, despite extensive research into 3D reconstruction of avatars from real humans, an easy and affordable method for generating realistic and VR-capable avatars is still lacking: existing methods either require complex capture hardware and/or controlled lab environments, do not provide sufficient visual fidelity, or cannot be rendered at sufficient frame rates for multi-avatar VR applications. To make avatar reconstruction widely available, we developed Avatars for the Masses – a client-server-based online service for scanning real humans with an easy-to-use smartphone application that empowers even non-expert users to capture photorealistic and VR-ready avatars. The data captured by the smartphone is transferred to a reconstruction server, where the avatar is generated in a fully automated process. Our advancements in capturing and reconstruction allow for higher-quality avatars even in less controlled in-the-wild environments. Extensive qualitative and quantitative evaluations show our method’s avatars to be on par with those generated by expensive expert-operated systems. Our method also generates more accurate replicas than the current state of the art in smartphone-based reconstruction, produces far fewer artifacts, and achieves much higher rendering performance in VR than three representative neural methods. A comprehensive user study confirms similar perception results compared to avatars reconstructed with expensive expert-operated systems and underscores sufficient usability of the overall system. To truly bring avatars to the masses, we will make our smartphone application publicly available for research purposes. More details can be found on the project page: https://avatars.cs.tu-dortmund.de.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1586875</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1586875</link>
        <title><![CDATA[Embracing differences in virtual reality: inclusive user-centered design of bimanual interaction techniques]]></title>
        <pubdate>2025-05-13T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Judith Hartfill</author><author>Shirin Hajahmadi</author><author>Susanne Schmidt</author><author>Gustavo Marfia</author><author>Frank Steinicke</author>
        <description><![CDATA[Introduction: Virtual Reality (VR) applications often require two-handed interactions, which can pose accessibility challenges for users with missing limbs or limited mobility in the arms or hands. This paper investigates how to make bimanual input more accessible and inclusive using electromyography and motion tracking. Methods (study 1): Through an inclusive user-centered design approach, we developed three interaction techniques after interviewing a person with unilateral upper limb differences. To assess baseline metrics on the efficiency and usability of the three prototypes, a user study was conducted with 26 participants without upper limb differences. Results and discussion (study 1): We found that these interaction techniques can be as efficient as unimanual interactions, even without prior learning, showing the potential of electromyography and motion tracking for bimanual interaction in VR. Methods (study 2): In a second user study, feedback was gathered from four participants with unilateral upper limb impairments to refine the interaction techniques and identify accessibility barriers in the design. Results and discussion (study 2): Results of the thematic analysis indicate that people with upper limb differences enjoyed the proposed bimanual interaction techniques, while suggesting improvements in ergonomics and system stability.]]></description>
      </item>
      </channel>
    </rss>