<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Virtual Reality | Augmented Reality section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/virtual-reality/sections/augmented-reality</link>
        <description>RSS Feed for Augmented Reality section in the Frontiers in Virtual Reality journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Mon, 11 May 2026 12:03:20 GMT</pubDate>
        <ttl>60</ttl>
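        <!--
          Illustrative only, not part of the feed data: a minimal Python sketch of a client
          that reads this channel and honors the ttl hint above (minutes between refreshes).
          The feed URL below is an assumed endpoint inferred from the channel link; substitute
          the real RSS URL when polling.

          import time
          import urllib.request
          import xml.etree.ElementTree as ET

          FEED_URL = "https://www.frontiersin.org/journals/virtual-reality/sections/augmented-reality/rss"  # assumed endpoint

          def fetch_items(url=FEED_URL):
              # Download and parse the RSS document.
              with urllib.request.urlopen(url) as resp:
                  root = ET.fromstring(resp.read())
              channel = root.find("channel")
              ttl_minutes = int(channel.findtext("ttl", default="60"))
              # Collect the fields most consumers need from each item.
              items = [
                  {
                      "title": item.findtext("title"),
                      "link": item.findtext("link"),
                      "pubDate": item.findtext("pubDate"),
                  }
                  for item in channel.findall("item")
              ]
              return ttl_minutes, items

          if __name__ == "__main__":
              ttl_minutes, items = fetch_items()
              for entry in items:
                  print(entry["pubDate"], entry["title"])
              time.sleep(ttl_minutes * 60)  # wait at least ttl minutes before polling again
        -->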
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2026.1820851</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2026.1820851</link>
        <title><![CDATA[Augmented reality in engineering education: a validated adoption framework for sustainable implementation]]></title>
        <pubDate>Thu, 07 May 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Nasija Suhail</dc:creator><dc:creator>Vian Ahmed</dc:creator><dc:creator>Zied Bahroun</dc:creator><dc:creator>Sara Saboor</dc:creator>
        <description><![CDATA[Introduction: Augmented reality (AR) is increasingly used in engineering education to support immersive visualization and interactive learning, particularly for concepts that require spatial reasoning and interpretation of complex structures. However, research remains dominated by single-course prototypes and isolated implementations, providing limited evidence on the institutional and human factors required for sustainable, curriculum-level adoption. This study develops and validates an adoption framework to guide strategic implementation of AR in engineering education. Methods: Using a mixed-methods design, we first conducted semi-structured interviews with key stakeholders (educators, students, and technology vendors) to elicit adoption enablers, motivators, barriers, and institutional actions. These insights informed a survey of engineering educators and students in UAE universities (N = 151; 101 students, 50 educators). The framework was validated using Confirmatory Factor Analysis (CFA), Partial Least Squares Structural Equation Modeling (PLS-SEM), and the Relative Importance Index (RII). Results: Institutional support, faculty training, and awareness are the strongest enabling conditions, while technical limitations and resource constraints are the primary barriers. Perceived instructional value, particularly enhanced visualization, improved learning outcomes, and increased student engagement, significantly strengthens stakeholders’ acceptance of AR integration. The final measurement model demonstrated acceptable reliability and validity after removing low-loading items (RMSEA ≈ 0.065; GFI ≈ 0.802; AGFI ≈ 0.754). Discussion: The study contributes a validated, decision-oriented adoption framework that helps higher education leaders move from pilot deployments to sustainable implementation of AR through aligned policy, capability development, and resource planning.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1700915</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1700915</link>
        <title><![CDATA[The validation of a heuristic toolkit for augmented reality: the Derby Dozen]]></title>
        <pubDate>Thu, 18 Dec 2025 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <dc:creator>Jessyca L. Derby</dc:creator><dc:creator>Michelle Aros</dc:creator><dc:creator>Barbara S. Chaparro</dc:creator>
        <description><![CDATA[Augmented Reality (AR) technologies hold tremendous potential across various domains, yet their inconsistent design and lack of standardized user experience (UX) pose significant challenges. This inconsistency hinders user acceptance and adoption and impacts application design efficiency. This study describes the development of a validated UX heuristic checklist for evaluating AR applications and devices. Building on previous work, this research expands the checklist through expert feedback, validation with heuristic evaluations, and user tests with five diverse AR applications and devices. The checklist was refined to 12 heuristics and 109 items and integrated into a single auto-scored Excel spreadsheet toolkit. This validated toolkit, named the Derby Dozen, empowers practitioners to evaluate AR experiences, quantify results, and ultimately inform better design practices, promoting greater usability and user satisfaction.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1709269</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1709269</link>
        <title><![CDATA[Use of augmented reality with image fusion to facilitate surgical stoma creation: an IDEAL stage 2A case series]]></title>
        <pubDate>Tue, 16 Dec 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Bradley B. Pua</dc:creator><dc:creator>Shoichiro Urabe</dc:creator><dc:creator>Anupam S. Chauhan</dc:creator><dc:creator>Alfredo Ormeno Zuniga</dc:creator><dc:creator>Mauro Dominguez</dc:creator><dc:creator>Davide Punzo</dc:creator><dc:creator>Andras Lasso</dc:creator><dc:creator>Art Sedrakyan</dc:creator><dc:creator>Jeffrey W. Milsom</dc:creator>
        <description><![CDATA[Introduction: Augmented reality (AR) has been increasingly applied to surgical procedures in fixed anatomical organs such as the brain, bones, aorta, and kidneys, enabling image-guided precision, but only sparingly to mobile organs such as the intestines. We report our initial experience with AR-guided intestinal stoma creation using an “image-guided” minimally invasive approach. Methods: Adult patients requiring elective or urgent stoma creation for colonic decompression or diversion were included. Patient-specific 3D reconstructions of the relevant portion of the GI tract and reference organs (skin, bones, vessels) from a preoperative CT were co-registered intraoperatively via a head-mounted augmented reality device (HoloLens 2) onto the patient’s body using visible surface landmarks such as the umbilicus, bones, and prior surgical scars. An AR-derived trajectory to the target bowel loop was marked on the skin, and stoma creation was performed at this site. Targeting of the correct bowel loop was confirmed with intraoperative fluoroscopy using intraluminal contrast injection. Technical success was defined as completion at the targeted site without open surgery. Results: Fourteen patients underwent AR-guided stoma creation (9 colostomies, 5 ileostomies). Indications were bowel obstruction (n = 6), fistula (n = 5), anastomotic leak (n = 1), perforation (n = 1), and gastrointestinal bleeding (n = 1). Median age was 76 years; median BMI was 23.8 kg/m². The median (range) number of prior abdominal surgeries was 2 (0–11). The median operative time was 131 min (interquartile range [IQR]: 96–143). The approach was either a direct cut-down over the stoma site (n = 11) or laparoscopically assisted (n = 3). AR permitted precise identification of the bowel loop required for stoma creation in all cases and helped avoid the need for standard open surgery. Median postoperative stay was 7 days (IQR: 3–10). No Clavien-Dindo grade III or IV complications, reoperations, or unplanned readmissions were observed. Two postoperative deaths occurred in ASA 4 patients, both due to the underlying malignancy and preoperative multiorgan failure, unrelated to the surgical procedure. Conclusion: This early experience suggests that AR methods can identify and target a loop of bowel and may play a useful role in intestinal stoma creation, with the potential to avoid the need for laparoscopy or extensive open surgery. Further clinical application and refinement are warranted.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1710161</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1710161</link>
        <title><![CDATA[Evaluating interaction design and user experience in augmented reality: a systematic review]]></title>
        <pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <dc:creator>Claire L. Hughes</dc:creator><dc:creator>Waldemar Karwowski</dc:creator>
        <description><![CDATA[Background: Augmented Reality (AR) technologies are rapidly advancing, offering new opportunities for interactive and immersive user experiences. However, the success of AR applications depends significantly on thoughtful interaction design and robust evaluation of user experience (UX). While conventional WIMP (Windows, Icons, Menus, Pointer) interfaces have dominated interface design, they present notable limitations in spatial, embodied environments like AR. Objectives: The main purpose of the current paper is to systematically review the state of AR interaction design and UX evaluation, with particular focus on the use of natural versus WIMP-based interaction paradigms. This review aims to assess how different interaction methods are implemented and evaluated, identify underexplored areas, and offer recommendations to guide future AR research and development. Methods: In this systematic review, Compendex, Web of Science, ScienceDirect, ACM Digital, IEEE, and Springer Computer Science were systematically queried for journal articles to explore the relationship between interaction design and user experience in AR. Following PRISMA guidelines, 86 peer-reviewed journal articles published between 2013 and 2024 were included based on predefined inclusion and exclusion criteria. Data were extracted and analyzed in terms of context of use, device types, interaction methods, and UX evaluation strategies. Results: The findings show that natural interactions, such as gesture, voice, and gaze, are increasingly favored in AR research due to their alignment with spatial and embodied interaction needs. Hybrid systems combining natural and WIMP elements were the most common, with natural components driving the experiential benefits. UX evaluation in AR remains heavily reliant on self-reported measures, with questionnaires like SUS and NASA-TLX dominating. Objective and physiological assessments were rarely used. Usability and cognitive load were the most frequently evaluated UX aspects, while immersive, social, and emotional dimensions remain significantly underexplored. Head-worn displays (HWDs), particularly HoloLens 2, were the most studied devices, although mobile platforms also played a major role in accessible AR design. Conclusion: This review provides insight into how UX is being considered in AR system development and highlights key trends, strengths, and gaps in current research. It underscores the need for more diverse evaluation methods and a broader focus on underrepresented experiential dimensions. By adopting mixed-method approaches and prioritizing user-centered, context-aware interaction paradigms, future AR systems can become more intuitive, inclusive, and effective across a range of application domains.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1690439</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1690439</link>
        <title><![CDATA[Design and validation of a mixed reality workflow for structural cardiac procedures in interventional cardiology]]></title>
        <pubDate>Wed, 12 Nov 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Jan Hecko</dc:creator><dc:creator>Daniel Precek</dc:creator><dc:creator>Jaroslav Januska</dc:creator><dc:creator>Miroslav Hudec</dc:creator><dc:creator>Katerina Barnova</dc:creator><dc:creator>Martina Palickova-Mikolasova</dc:creator><dc:creator>Matej Pekar</dc:creator><dc:creator>Jan Chovancik</dc:creator><dc:creator>Libor Sknouril</dc:creator><dc:creator>Otakar Jiravsky</dc:creator>
        <description><![CDATA[Background: Mixed reality (MR) technologies, such as those integrating Unity and Microsoft HoloLens 2, hold promise for enhancing non-coronary interventions in interventional cardiology by providing real-time 3D visualizations, multi-user collaboration, and gesture-based interactions. However, barriers to clinical adoption include insufficient validation of performance, usability, and workflow integration, aligning with the Research Topic on transforming medicine through extended reality (XR) via robust technologies, education, and ethical considerations. This study addresses these gaps by developing and rigorously evaluating an MR system for procedures like transcatheter valve replacements and atrial septal defect repairs. Methods: The system was built using Unity with modifications to the UnityVolumeRendering plugin for Digital Imaging and Communications in Medicine (DICOM) data processing and volume rendering, Mixed Reality Toolkit (MRTK) for user interactions, and Photon Unity Networking (PUN2) for multi-user synchronization. Validation involved technical performance metrics (e.g., frame rate, latency), measured via Unity Profiler and Wireshark during stress tests. Usability was assessed using the System Usability Scale (SUS) and NASA Task Load Index (NASA-TLX), as well as through task-based trials. Workflow integration was evaluated in a simulated cath-lab setting with six cardiologists, focusing on calibration times and responses to a custom questionnaire. Statistical analysis included means ± standard deviation (SD) and 95% confidence intervals. Results: Technical benchmarks showed frame rates of 59.6 ± 0.7 fps for medium datasets, local latency of 14.3 ± 0.5 ms (95% CI: 14.1–14.5 ms), and multi-user latency of 26.9 ± 12.3 ms (95% CI: 23.3–30.5 ms), with 91% gesture recognition accuracy. Usability assessment yielded a SUS score of 77.5 ± 3.8 and NASA-TLX of 37 ± 7, with task completion times under 60 s. Workflow metrics indicated a calibration time of 38 s and high communication benefits (4.5 ± 0.2 on a 1–5 scale). Conclusion: This validated MR solution demonstrates feasibility for precise, collaborative cardiac interventions, paving the way for broader XR adoption in medicine while addressing educational and ethical integration challenges.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1649901</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1649901</link>
        <title><![CDATA[Luminosity thresholds in projection mapping under environmental lighting]]></title>
        <pubDate>Fri, 24 Oct 2025 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <dc:creator>Masaki Takeuchi</dc:creator><dc:creator>Kowa Koida</dc:creator><dc:creator>Daisuke Iwai</dc:creator>
        <description><![CDATA[Projection mapping alters the visual appearance of objects by projecting images onto their surfaces. Traditionally, its application has been limited to dark environments because the contrast of the projected image diminishes in environmental lighting. This often results in the target appearing self-luminous, creating a perceptually unnatural effect. Recently, however, projection systems have been developed that maintain high contrast even in well-lit environments. Studies have shown that projections in bright rooms can shift perception from an appearance of self-luminosity to one of being illuminated. This advancement holds significant promise for applications that require visual naturalness, such as product design. Nonetheless, the influence of projected content on perception and the underlying mechanisms of perceptual color transitions in projection targets remain unclear. In this study, we found that the presence or absence of patterns in the projected content affects the luminosity threshold at which the projection target is perceived as self-luminous. Previous research in perception has suggested that the visual system relies on intrinsic criteria to determine whether an object is self-luminous. However, our results revealed that in projection mapping, the internal reference for color perception, developed through observations of colors in daily life, does not always apply. These results indicate the existence of perceptual phenomena unique to projection mapping. This insight is crucial for product design, which aims to achieve representations that closely resemble the appearance of real-world objects.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1652074</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1652074</link>
        <title><![CDATA[A critical appraisal of computer vision in orthodontics]]></title>
        <pubDate>Mon, 20 Oct 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Elie Amm</dc:creator><dc:creator>Melih Motro</dc:creator><dc:creator>Marc Fisher</dc:creator><dc:creator>Vlad Surdu</dc:creator><dc:creator>E. Brandon Strong</dc:creator><dc:creator>Jeffrey Potts</dc:creator><dc:creator>Christian El Amm</dc:creator><dc:creator>Suhair Maqusi</dc:creator>
        <description><![CDATA[Objective: To evaluate the precision of a computer vision (CV) and augmented reality (AR) pipeline for orthodontic applications, specifically in direct bonding and temporary anchorage device (TAD) placement, by quantifying system accuracy in six degrees of freedom (6DOF) pose estimation. Methods: A custom keypoint detection model (YOLOv8n-pose) was trained using over 1.5 million synthetic images and a supplemental manually annotated dataset. Thirty anatomical landmarks were defined across maxillary and mandibular arches to maximize geometric reliability and visual detectability. The system was deployed on a Microsoft HoloLens 2 headset and tested using a fixed typodont setup at 55 cm. Pose estimation was performed in “camera space” using Perspective-n-Point (PnP) methods and transformed into “world space” via AR spatial tracking. Thirty-four poses were collected and analyzed. Errors in planar and depth estimation were modeled and experimentally measured. Results: Rotational precision remained below 1°, while planar pose precision was sub-millimetric (X: 0.46 mm, Y: 0.30 mm), except for depth (Z), which showed a standard deviation of 5.01 mm. These findings aligned with theoretical predictions based on stereo vision and time-of-flight sensor limitations. Integration of headset and object pose led to increased Y-axis variability, possibly due to compounded spatial tracking error. Sub-pixel accuracy of keypoint detection was achieved, confirming high performance of the trained detector. Conclusion: The proposed CV-AR system demonstrated high precision in planar pose estimation, enabling potential use in clinical orthodontics for tasks such as TAD placement and bracket positioning. Depth estimation remains the primary limitation, suggesting the need for sensor fusion or multi-angle views. The system supports real-time deployment on mobile platforms and serves as a foundational tool for further clinical validation and AR-guided procedures in dentistry.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1641316</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1641316</link>
        <title><![CDATA[The effects of using augmented reality in rehabilitation and recovery exercise on patients’ outcomes and experiences: a systematic review]]></title>
        <pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <dc:creator>Deiadra Modlin</dc:creator><dc:creator>Yu-Tung Kuo</dc:creator>
        <description><![CDATA[Augmented reality (AR) supplements reality by allowing the user to experience computer-generated graphics as though they appear in the real world. This literature review presents a collection of data on AR-based exercise and rehabilitation applications and aims to identify gaps within the existing research. The PRISMA method was applied to systematically review relevant articles published between 2017 and 2025. The databases include Academic Search Ultimate, British Library Serials, MEDLINE, and ProQuest Central. The reviewed literature indicates that AR for rehabilitation could help patients physically and mentally and improve their motivation and engagement. Different types of AR tools were used to help with the rehabilitation of patients with health issues such as knee injuries or stroke. Questionnaires and medical tests were the most common methods to gather data from the patients. AR rehabilitation technology may be able to bring a new form of human-computer interaction for patients.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1613717</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1613717</link>
        <title><![CDATA[An augmented outdoor workout system for jogging and calisthenics support]]></title>
        <pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Vincenzo Armandi</dc:creator><dc:creator>Lorenzo Stacchio</dc:creator><dc:creator>Pasquale Cascarano</dc:creator><dc:creator>Shirin Hajahmadi</dc:creator><dc:creator>Lorenzo Donatiello</dc:creator><dc:creator>Gustavo Marfia</dc:creator>
        <description><![CDATA[In this paper we introduce M-AGEW (Magic AuGmentEd Workout), an Augmented Reality (AR) application that assists users during outdoor, highly dynamic workouts such as jogging and calisthenics. It is a client-server-based system with a proprietary data structure (WKAN) that dynamically defines sequences of workouts as finite state machines. M-AGEW adapts workout intensity dynamically based on real-time sensor data and overlays contextual AR feedback and guidance, including biometric readings and a virtual coaching avatar. The technology was created through a user-centric design process, supported by an initial user study and an industrial partnership. We validate M-AGEW through a technology acceptance evaluation with professional athletes, reporting promising results in usability, enjoyment, and perceived usefulness. Our findings suggest that AR headsets can effectively enhance and supplement outdoor physical activity, offering a motivating alternative to standard fitness monitoring.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1649785</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1649785</link>
        <title><![CDATA[Programmable reality]]></title>
        <pubDate>Tue, 09 Sep 2025 00:00:00 GMT</pubDate>
        <category>Perspective</category>
        <dc:creator>Ryo Suzuki</dc:creator><dc:creator>Parastoo Abtahi</dc:creator><dc:creator>Chen Zhu-Tian</dc:creator><dc:creator>Mustafa Doga Dogan</dc:creator><dc:creator>Andrea Colaco</dc:creator><dc:creator>Eric J. Gonzalez</dc:creator><dc:creator>Karan Ahuja</dc:creator><dc:creator>Mar Gonzalez-Franco</dc:creator>
        <description><![CDATA[Innovations in spatial computing and artificial intelligence (AI) are making it possible to overlay dynamic, interactive digital elements on the physical world. Soon, every object might have a real-time digital twin, extending the “Internet of Things” by making it possible to identify and interact with even unconnected items. This programmable reality would enable computational manipulation of the world around us through alteration of its appearance or functionality, similar to software, but for reality itself. Advances in AI language models have enabled zero-shot segmentation and understanding of the world, making it possible to query and manipulate objects with precision. However, this vision also demands natural and intuitive ways for humans to interact with these models through gestures, gaze, and existing devices. Augmented reality (AR) provides the ideal bridge between AI output and human input in the physical world. Moreover, diffusion models and physics simulations offer exciting possibilities for content generation and editing, allowing us to transform everyday activities into extraordinary experiences. As AR devices become ubiquitous and indistinguishable from reality, these technologies blur the lines between reality and simulations. This raises profound questions about how we perceive and experience the world while having implications for memory, learning, and even behavior. Programmable reality enabled by AR and AI has vast potential to reshape our relationships with the digital realm, ultimately making it an extension of the physical realm.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1679670</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1679670</link>
        <title><![CDATA[Correction: A design toolkit for task support with mixed reality and artificial intelligence]]></title>
        <pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
        <category>Correction</category>
        <dc:creator>Arthur Caetano</dc:creator><dc:creator>Alejandro Aponte</dc:creator><dc:creator>Misha Sra</dc:creator>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1592287</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1592287</link>
        <title><![CDATA[ARchitect: advancing architectural visualization and interaction through handheld augmented reality]]></title>
        <pubDate>Mon, 11 Aug 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Sabahat Israr</dc:creator><dc:creator>Mudassar Ali Khan</dc:creator><dc:creator>Muhammad Shahid Anwar</dc:creator><dc:creator>Kamran Ahmad Awan</dc:creator><dc:creator>Saoucene Mahfoudh</dc:creator><dc:creator>Turki Althaqafi</dc:creator><dc:creator>Wadee Alhalabi</dc:creator>
        <description><![CDATA[The architecture, engineering, and construction industry requires enhanced tools for efficient collaboration and user-centric designs. Traditional visualization methods relying on 2D/3D CAD models often fall short of modern demands for interactivity and context-aware representation. To address this limitation, this study introduces ARchitect, a mobile-based markerless augmented reality (AR) framework aimed at revolutionizing architectural artifact visualization and interaction. The proposed approach enables users to dynamically overlay and manipulate 3D architectural elements, such as roofs, windows, and doors, within their physical environment using AR raycasting and device sensors. Algorithms supporting translation, rotation, and scaling allow precise adjustments to model placement while integrating metadata to enhance design comprehension. Real-time lighting adaptation ensures seamless environmental blending, and the framework’s usability is quantitatively evaluated using the Handheld Augmented Reality Usability Scale (HARUS). ARchitect achieved a usability score of 89.2, demonstrating significant improvements in user engagement, accuracy, and decision-making compared to conventional methods.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1590871</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1590871</link>
        <title><![CDATA[Study on font preferences of native and non-native speakers in a virtual reality environment]]></title>
        <pubDate>Wed, 30 Jul 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Huidan Zhang</dc:creator><dc:creator>Daisuke Sakamoto</dc:creator><dc:creator>Tetsuo Ono</dc:creator>
        <description><![CDATA[Introduction: With the growing use of virtual reality (VR) in areas like education and digital reading, understanding the factors that impact legibility in these environments is crucial. While traditional screen legibility has been extensively studied, the transition to VR requires reevaluation, especially when considering different languages and the distinction between native and non-native speakers. Methods: This study explores font preferences in VR for Chinese, Japanese, and English, focusing on font weight, style, complexity, and viewing distance. We employed cross-linguistic VR-based experiments with quantitative assessments and qualitative interviews. Results: Our findings reveal that font preferences are influenced by a combination of language familiarity (native/non-native), viewing distance, and character (glyph) complexity. Specifically, serif fonts enhance the legibility of complex logographic characters at close distances, whereas sans-serif fonts are more effective for alphabetic scripts, particularly at longer viewing distances. Moreover, when processing unfamiliar languages, users tend to shift their evaluation criteria from focusing primarily on legibility to a more balanced assessment that also incorporates aesthetic appeal. Discussion: These insights underscore the importance of adaptive typographic strategies in VR, offering evidence-based guidelines that can enhance both legibility and user experience for a diverse global audience.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1580619</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1580619</link>
        <title><![CDATA[Applications of augmented reality in cardiology till 2024: a comprehensive review of innovations and clinical impacts]]></title>
        <pubDate>Wed, 23 Jul 2025 00:00:00 GMT</pubDate>
        <category>Review</category>
        <dc:creator>Ibthisam Ismail Sharieff</dc:creator><dc:creator>Diviya Bharathi Ravikumar</dc:creator><dc:creator>Shashvat Joshi</dc:creator><dc:creator>Barath Prashanth Sivasubramanian</dc:creator><dc:creator>Rajat Gupta</dc:creator><dc:creator>Yash Garg</dc:creator><dc:creator>Umabalan Thirupathy</dc:creator><dc:creator>Ragavendar Saravanabavanandan</dc:creator><dc:creator>Siva Naga Yarrarapu</dc:creator><dc:creator>Vikramaditya R. Samala Venkata</dc:creator>
        <description><![CDATA[Introduction: Augmented reality (AugR) is becoming a widely recognized and innovative platform in global healthcare. AugR has revolutionized cardiology by enhancing the understanding of cardiac structure and function. This review highlights its applications in diagnosis, surgical planning, cardiac procedures, training, rehabilitation, and the future impact of AugR-related technology. Methods: This review compiles original research and review articles on AugR in cardiology from PubMed through 2024. Results: Advancements in visualization and image processing techniques facilitate the development of AugR tools using holographic displays, enhancing diagnostic accuracy and pre-surgical planning. Current AugR tools offer 3D heart imaging for diagnostic procedures, such as assessing Left Ventricular Ejection Fraction (LVEF). AugR enables real-time visualization for congenital and structural heart diseases, aiding in catheter navigation, transcatheter valve procedures, and arrhythmia treatments. Its effectiveness extends to cardiac resynchronization therapy, ventricular tachycardia ablation, and ultrasound-guided catheterization. AugR surpasses standard 2D fluoroscopy in surgical interventions by optimizing fluoroscopic angles, improving pacemaker placement, reducing X-ray exposure, and increasing procedural accuracy. It also enhances medical training by providing immersive experiences for residents and fellows, improving emergency response training. User-friendly AugR technologies effectively engage patients, promote physical activity, and enhance outcomes in cardiac rehabilitation. With further testing, AugR could serve as a pivotal surgical navigation tool in cardiac transplantology. Mixed reality enhances procedural planning and intraoperative navigation in cardiac electrophysiology by providing real-time 3D visualization and spatial orientation. Holographic visualization techniques combined with 3D and 4D printing hold future potential in cardiac care, particularly for designing patient-specific prosthetics. However, widespread clinical adoption of AugR in many healthcare institutions is limited by technical challenges and high costs related to specialized hardware, software, and maintenance. Conclusion: AugR holds great promise in transforming cardiac care, but its clinical integration depends on rigorous trials to validate its effectiveness. While much research remains theoretical, increased human testing is essential for real-world applications. Advancing AugR, alongside technologies like 3D/4D printing and holography, could pave the way for a safer and more precise future in cardiology.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1574965</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1574965</link>
        <title><![CDATA[Exploring AR hand augmentations as error feedback mechanisms for enhancing gesture-based tutorials]]></title>
        <pubDate>Mon, 23 Jun 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Catarina G. Fidalgo</dc:creator><dc:creator>Yukang Yan</dc:creator><dc:creator>Mauricio Sousa</dc:creator><dc:creator>Joaquim Jorge</dc:creator><dc:creator>David Lindlbauer</dc:creator>
        <description><![CDATA[Self-guided tutorials from videos help users learn new skills and complete tasks with varying complexity, from repairing a gadget to learning how to play an instrument. However, users may struggle to interpret 3D movements and gestures from 2D representations due to different viewpoints, occlusions, and depth perception. Augmented Reality (AR) can alleviate this challenge by enabling users to view complex instructions in their 3D space. However, most approaches only provide feedback if a live expert is present and do not consider self-guided tutorials. Our work explores virtual hand augmentations as automatic feedback mechanisms to enhance self-guided, gesture-based AR tutorials. We evaluated different error feedback designs and hand placement strategies with respect to speed, accuracy, and preference in a user study with 18 participants. Specifically, we investigate two visual feedback styles: color feedback, which changes the color of the hand’s joints to signal pose correctness, and shape feedback, which exaggerates finger length to guide correction. We also investigate two placement strategies: superimposed, where the feedback hand overlaps the user’s own, and adjacent, where it appears beside the user’s hand. Results show significantly faster replication times with color feedback or the no-explicit-feedback baseline than with shape manipulation feedback. Furthermore, despite users’ preference for adjacent placement of the feedback representation, superimposed placement significantly reduces replication time. We found no effects on accuracy for short-term recall, suggesting that while these factors may influence task efficiency, they may not strongly affect overall task proficiency.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1533236</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1533236</link>
        <title><![CDATA[Are you drunk? No, I am CybAR sick! – interacting with the real world via pass-through augmented reality is a sobering discovery]]></title>
        <pubDate>Tue, 27 May 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Attila Bekkvik Szentirmai</dc:creator><dc:creator>Ole Andreas Alsos</dc:creator><dc:creator>Anne Britt Torkildsby</dc:creator><dc:creator>Yavuz Inal</dc:creator>
        <description><![CDATA[Introduction: The recent paradigm shift in Augmented Reality (AR) technology features the integration of Pass-Through Augmented Reality (PT-AR) into flagship Extended Reality (XR) devices. From a technological standpoint, PT-AR is Virtual Reality (VR) and differs significantly from the AR technologies used in devices like Google Glass, Magic Leap, or HoloLens, which utilize optical see-through AR. PT-AR renders the user’s physical environment on digital displays, as opposed to providing a direct, natural view of the physical world. This “virtual” digital representation of reality is an unexplored area. What makes AR distinct from other technologies, including VR, is its “reality” aspect. AR overlays, projects, and enhances the user’s physical environment with digital information. Accordingly, the primary scene of interaction in AR is the real world. This study takes a novel approach by focusing on the “reality” aspect of AR. It compares two commercially available PT-AR systems: a low-end smartphone-based device and a high-end dedicated headset. The study examines how each affects users’ comfort, orientation, and task performance during everyday activities in the physical world. Methods: We employed a mixed-method approach involving 20 participants with diverse backgrounds in terms of age, gender, and VR/AR experience. We evaluated the impact of PT-AR across three foundational real-world task domains: walking, dexterity, and full-body coordination, via NASA Task Load Index (NASA-TLX) and Simulator Sickness Questionnaire (SSQ) assessments, observations, and interviews. Results: Our findings suggest that current PT-AR solutions negatively affect user comfort, orientation, wellbeing, and task performance. Both systems fall short of AR’s promise of seamless engagement with and integration of reality. Participants exhibited symptoms similar to those of intoxication, including loss of body coordination, general discomfort, and difficulties in focusing and concentrating. Discussion: We argue that PT-AR may introduce a new form of discomfort that differs from well-known issues like cybersickness or motion sickness, requiring further research on XR’s “reality” aspects to understand the interaction between human and technological factors comprehensively.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1536393</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1536393</link>
        <title><![CDATA[A design toolkit for task support with mixed reality and artificial intelligence]]></title>
        <pubDate>Fri, 25 Apr 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Arthur Caetano</dc:creator><dc:creator>Alejandro Aponte</dc:creator><dc:creator>Misha Sra</dc:creator>
        <description><![CDATA[Efficient performance and acquisition of physical skills, from sports techniques to surgical procedures, require instruction and feedback. In the absence of a human expert, Mixed Reality Intelligent Task Support (MixITS) can offer a promising alternative. These systems integrate Artificial Intelligence (AI) and Mixed Reality (MR) to provide real-time feedback and instruction as users practice and learn skills using physical tools and objects. However, designing MixITS systems presents challenges beyond engineering complexities. The complex interactions between users, AI, MR interfaces, and the physical environment create unique design obstacles. To address these challenges, we present MixITS-Kit, an interaction design toolkit derived from our analysis of MixITS prototypes developed by eight student teams during a 10-week-long graduate course. Our toolkit comprises design considerations, design patterns, and an interaction canvas. Our evaluation suggests that the toolkit can serve as a valuable resource for novice practitioners designing MixITS systems and researchers developing new tools for human-AI interaction design.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1552321</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1552321</link>
        <title><![CDATA[Development and evaluation of a mixed reality music visualization for a live performance based on music information retrieval]]></title>
        <pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Matthias Erdmann</dc:creator><dc:creator>Markus von Berg</dc:creator><dc:creator>Jochen Steffens</dc:creator>
        <description><![CDATA[The present study explores the development and evaluation of a mixed reality music visualization for a live music performance. Real-time audio analysis and crossmodal correspondences were used as design guidelines for creating the visualization, which was presented through a head-mounted display. To assess the impact of the music visualization on the audience’s aesthetic experience, a baseline visualization was designed, featuring the same visual elements but with random changes of color and movement. The audience’s aesthetic experience of the two conditions (i.e., listening to the same song with different visualizations) was assessed using the Aesthetic Emotions Scale (AESTHEMOS) questionnaire. Additionally, participants answered questions regarding the perceived audiovisual congruence of the stimuli and questionnaires about individual musicality and aesthetic receptivity. The results show that the visualization controlled by real-time audio analysis was associated with a slightly enhanced aesthetic experience of the audiovisual composition compared to the randomized visualization, thereby supporting similar findings reported in the literature. Furthermore, the tested personal characteristics of the participants did not significantly affect aesthetic experience. Significant correlations between these characteristics and the aesthetic experience were observed only when the ratings were averaged across conditions. An open interview provided deeper insights into the participants’ overall experiences of the live music performance. The results of the study offer insights into the development of real-time music visualization in mixed reality, examine how the specific audiovisual stimuli employed influence the aesthetic experience, and provide potential technical guidelines for creating new concert formats.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1515937</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1515937</link>
        <title><![CDATA[Enjoy it! Cosmetic try-on apps and augmented reality, the impact of enjoyment, informativeness and ease of use]]></title>
        <pubDate>Wed, 12 Feb 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>V. Micheletto</dc:creator><dc:creator>S. Accardi</dc:creator><dc:creator>A. Fici</dc:creator><dc:creator>F. Piccoli</dc:creator><dc:creator>C. Rossi</dc:creator><dc:creator>M. Bilucaglia</dc:creator><dc:creator>V. Russo</dc:creator><dc:creator>M. Zito</dc:creator>
        <description><![CDATA[Virtual Try-On cosmetics apps based on Augmented Reality (AR) technology can improve both consumer product evaluation and purchase decisions, while also supporting companies’ marketing strategies. This study explores the factors influencing the use of AR-based cosmetics apps by administering the Technology Acceptance Model (TAM) and additional scales to a sample of 634 Italian consumers. Perceived Informativeness (PI) and Perceived Ease of Use (PEOU) were hypothesized as predictors of TRUST, DOUBT, Makeup Involvement (MI), Perceived Diagnosticity (PD), and Behavioral Intention (BI), with Perceived Enjoyment (PE) acting as a mediating variable. The structural equation model (SEM) confirmed PI as a strong predictor, with PE serving as a key mediator. The findings suggest that a moderate level of PE and PEOU is ideal: excessive simplicity or playfulness increases DOUBT and decreases TRUST. Both PD and BI are positively affected by the AR experience, with their coexistence being crucial for effective app usage. Additionally, PI, mediated by PE, significantly influences BI, emphasizing the role of information in consumer decision-making. These results provide valuable insights for the cosmetics industry, offering guidance to refine user experiences and enhance consumer engagement and satisfaction.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1499830</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1499830</link>
        <title><![CDATA[Enhancing augmented reality with machine learning for hands-on origami training]]></title>
        <pubDate>Mon, 27 Jan 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Mikołaj Łysakowski</dc:creator><dc:creator>Jakub Gapsa</dc:creator><dc:creator>Chenxu Lyu</dc:creator><dc:creator>Thomas Bohné</dc:creator><dc:creator>Sławomir Konrad Tadeja</dc:creator><dc:creator>Piotr Skrzypczyński</dc:creator>
        <description><![CDATA[This research explores integrating augmented reality (AR) with machine learning (ML) to enhance hands-on skill acquisition through origami folding. We developed an AR system using the YOLOv8 model to provide real-time feedback and automatic validation of each folding step, offering step-by-step guidance to users. A novel approach to training dataset preparation was introduced, which improves the accuracy of detecting and assessing origami folding stages. In a formative user study involving 16 participants tasked with folding multiple origami models, the results revealed that while the ML-driven feedback increased task completion times, it also made participants feel more confident throughout the folding process. However, participants also reported that the feedback system added cognitive load and slowed their progress, even as it provided valuable guidance. These findings suggest that while ML-supported AR systems can enhance the user experience, further optimization is required to streamline the feedback process and improve efficiency in complex manual tasks.]]></description>
      </item>
      </channel>
    </rss>