
SYSTEMATIC REVIEW article

Front. Virtual Real., 03 December 2025

Sec. Augmented Reality

Volume 6 - 2025 | https://doi.org/10.3389/frvir.2025.1710161

This article is part of the Research Topic: Human Factors and Design in Immersive and Generative Media Technologies.

Evaluating interaction design and user experience in augmented reality: a systematic review

  • Department of Industrial Engineering, University of Central Florida, Orlando, FL, United States

Background: Augmented Reality (AR) technologies are rapidly advancing, offering new opportunities for interactive and immersive user experiences. However, the success of AR applications depends significantly on thoughtful interaction design and robust evaluation of user experience (UX). While conventional WIMP (Windows, Icons, Menus, Pointer) interfaces have dominated interface design, they present notable limitations in spatial, embodied environments like AR.

Objectives: The main purpose of the current paper is to systematically review the state of AR interaction design and UX evaluation, with a particular focus on the use of natural versus WIMP-based interaction paradigms. This review aims to assess how different interaction methods are implemented and evaluated, identify underexplored areas, and offer recommendations to guide future AR research and development.

Methods: In this systematic review, Compendex, Web of Science, ScienceDirect, ACM Digital, IEEE, and Springer Computer Science were systematically queried for journal articles in order to explore the relationship between interaction design and user experience in AR. Following PRISMA guidelines, 86 peer-reviewed journal articles published between 2013 and 2024 were included based on predefined inclusion and exclusion criteria. Data were extracted and analyzed in terms of context of use, device types, interaction methods, and UX evaluation strategies.

Results: The findings show that natural interactions, such as gesture, voice, and gaze, are increasingly favored in AR research due to their alignment with spatial and embodied interaction needs. Hybrid systems combining natural and WIMP elements were the most common, with natural components driving the experiential benefits. UX evaluation in AR remains heavily reliant on self-reported measures, with questionnaires like SUS and NASA-TLX dominating. Objective and physiological assessments were rarely used. Usability and cognitive load were the most frequently evaluated UX aspects, while immersive, social, and emotional dimensions remain significantly underexplored. Head-worn displays (HWDs), particularly HoloLens 2, were the most studied devices, although mobile platforms also played a major role in accessible AR design.

Conclusion: This review provides insight into how UX is being considered in AR system development and highlights key trends, strengths, and gaps in current research. It underscores the need for more diverse evaluation methods and a broader focus on underrepresented experiential dimensions. By adopting mixed-method approaches and prioritizing user-centered, context-aware interaction paradigms, future AR systems can become more intuitive, inclusive, and effective across a range of application domains.

1 Introduction

Augmented Reality (AR) technologies have experienced significant advancements over the past several decades. Head-worn display (HWD) systems have progressed from tethered devices with three degrees of freedom (3 DoF) and limited tracking capabilities to high-resolution, untethered solutions that support six degrees of freedom (6 DoF) along with comprehensive tracking of eye, head, hand, and body movements (Stanney et al., 2024). These advancements, bolstered by progress in mobile internet, artificial intelligence (AI), machine learning (ML), and computing bandwidth, hold the potential to revolutionize the manner in which users interact with digital environments (Stanney et al., 2024). However, conventional user interfaces, such as those adhering to the WIMP (windows, icons, menus, pointing device) paradigm, exhibit inherent constraints within AR experiences, particularly when contrasted with more intuitive and natural interaction methods.

1.1 Interaction design in AR

WIMP interfaces are fundamentally geared towards two-dimensional (2D) interactions (i.e., screen interactions via keyboard and mouse), which operate contrary to the immersive and three-dimensional (3D) nature of AR. In traditional graphical user interface (GUI) systems, user engagement is often constrained by the rigid structures of these interfaces, leading to a less immersive and interactive experience. WIMP interfaces rely heavily on symbolic representations that do not take full advantage of the spatial and interactive potential of AR environments (Li, 2024), such as allowing users to interact naturally and intuitively using gestures, eye gaze, voice input, haptic feedback, and even physiological data from wearable sensors. These interactions are more consistent with how humans behave in physical space, rendering traditional WIMP elements overly contrived, visually cluttering, and interactionally limiting (Stanney et al., 2024). For instance, Jin et al. (2022) reveal that natural user interfaces (NUIs), which allow for gesture-based and hand interactions, significantly enhance user presence and engagement in AR scenarios. In contrast, WIMP controls can detract from immersive experiences, as they force users into a detached interaction model that does not leverage the benefits of a 3D environment (Jin et al., 2022; Polvi et al., 2018).

Additionally, WIMP interfaces fail to respond dynamically to the context of use. AR environments demand ambient and spatially embedded paradigms that adapt to the user’s location, posture, and behavior in real time. Despite this need for adaptability, many AR systems still rely on static 2D windows and menus, a holdover from what Oren (1990) described as the “incunabular stage,” where outdated design models persist inappropriately in new technological contexts. Unlike context-aware systems, which tailor UX using real-time data, WIMP interfaces remain fixed regardless of environment or user needs. Their one-size-fits-all logic clashes with AR’s demand for personalized, real-time adaptability. For example, in laparoscopic surgery, context-aware AR provides situational information, such as distances and real-time guidance, enhancing efficiency (Katić et al., 2013). By contrast, WIMPs lack this adaptability, potentially hindering performance when contextual cues are critical. The importance of context-awareness is further emphasized by Cao et al. (2021), who found that effective context-aware AR enhances UX by aligning information to the user’s environment and needs. WIMP interfaces, however, limit this responsiveness, constraining how systems can react to user actions or environmental changes.

Enhanced multi-sensory feedback is a further characteristic of modern AR systems and a critical factor in moving beyond the WIMP paradigm. Integrating sensory inputs such as touch and sound fosters more holistic and inclusive experiences. Li (2024) indicates that incorporating audio feedback and non-visual interfaces can boost interactivity and usability, particularly for users with visual impairments. WIMP systems, by contrast, lack this depth of sensory engagement, restricting accessibility and diminishing usability for diverse populations. Multimodal approaches, such as integrating auditory feedback and haptic interactions, improve engagement and satisfaction (Jiboku and Obarayi, 2023; Jin et al., 2022), supporting inclusive, intuitive control. Introducing multimodality softens the rigidity of WIMP paradigms and enhances responsiveness to users’ senses and behaviors.

Adaptive interactions in AR systems promote direct, first-person engagement, but WIMP interfaces do not foster these types of interactions. Even when embedded in AR, WIMP systems often retain their original cognitive burdens by requiring users to map abstract functions (e.g., “click” or “drag”) onto spatial actions with no physical grounding, creating dissonance between expectation and behavior. In contrast, gestures (hand or body movements interpreted by the system) enable intuitive manipulation of virtual elements, improving information flow and supporting seamless task completion (Ong et al., 2020). These adaptive interfaces are especially critical in industrial settings where speed and accuracy are paramount (Kolla and Plapper, 2023).

The potential of tangible AR systems, which integrate physical objects with digital elements, further highlights WIMP limitations. Such systems allow users to manipulate virtual objects as though tangible, aligning closely with human cognitive and perceptual abilities (Ha and Woo, 2010). The tactile engagement provided by tangible AR enhances both naturalness and effectiveness, making experiences more comprehensible than flat WIMP interfaces. As Kiourexidou et al. (2024), Jiang et al. (2022), and Jin et al. (2022) note, WIMP interfaces can increase cognitive load by forcing users to navigate between physical and virtual contexts, while natural user interfaces employing gestures and movement foster presence and satisfaction. Attempts to “spatialize” WIMP components may provide temporary familiarity, but they fail to unlock the full cognitive, sensory, and spatial potentials of AR. Users must instead be offered natural interaction modes that mirror real-world behaviors (i.e., pointing, grabbing, walking) rather than being asked to click on 3D versions of buttons meant for 2D contexts.

3D systems inherently demand 3D interactions. While WIMP paradigms once offered a familiar entry point into AR, the time has come to transcend these legacy models. To harness AR’s full potential, interfaces must be natively spatial and adaptive, leveraging movement and perception to enhance engagement and learning. This shift underscores a broader transition in design toward innovative, human-centered paradigms aligned with the evolving capabilities of AR technology.

1.2 User experience

User experience (UX) describes the interactions and journeys an end-user passes through while using a product or service (Norman and Nielsen, 1998). It is a multidimensional construct encompassing users’ perceptions, emotions, and behaviors as they engage with systems (Hassenzahl, 2010). Beyond functionality, UX includes emotional responses, usability, accessibility, and overall satisfaction, reflecting how a system supports both practical and affective needs. Hassenzahl (2010) highlights the importance of understanding the user’s emotional and cognitive experiences throughout the design process, emphasizing that positive UX is not solely determined by technical functionality but also by how the user feels and engages with the system. In AR, where engagement occurs in spatial, embodied contexts, superior UX requires more than efficient interfaces; it must foster presence, comfort, and emotional resonance to create seamless, empowering experiences that leverage the 3D nature of the environment (Norman and Nielsen, 1998).

While usability is a critical aspect of overall UX, it represents only one dimension of the broader experience. Usability focuses on whether a system is easy to learn and use (Norman and Nielsen, 2012), whereas UX is a broader concept that encompasses the entire experience of the user, including their goals, expectations, and overall satisfaction. A user-centered design approach remains the most effective strategy for achieving optimal performance outcomes. By using this approach to address both the functional and emotional dimensions of interaction and to align with natural, intuitive behaviors, AR designers can create systems that are not only efficient and accessible but also engaging and fulfilling, ultimately enhancing overall system quality (Hassenzahl, 2010).

The primary purpose of this paper is to review studies in the literature that assess the UX of AR systems and to analyze the interaction design approaches currently being taken in this field of study. The aim is to analyze systems’ use of natural versus WIMP-based design paradigms, investigate how they approach studying UX, and inspire more effective AR interaction design. The remaining sections are organized as follows. Section 2 presents the research methodology and the criteria for selecting papers to study in the current paper. Section 3 provides the results of the literature search, study characteristics, and a general overview of the selected articles. Section 4 synthesizes the literature and provides an understanding of the current field of study. In Sections 5 and 6, limitations and future areas for research are discussed, followed by recommendations for developers in Section 7 and the paper’s conclusion in Section 8.

2 Methods

This systematic review was based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines (Liberati et al., 2009). The review was conducted using this protocol by specifying the research questions and the search strategy before conducting the search to reduce the effect of research expectations on the review. The established protocol was tested and scaled to ensure appropriate data were gathered. The protocol included two main features: developing research questions based on the objectives and determining the search strategy according to those questions.

2.1 Research questions

Based on the objectives of this systematic review laid out in the introduction, the following research questions were devised for this literature review:

• RQ1: Which types of interaction techniques have been investigated in studies involving users of AR systems?

• RQ2: How do AR studies evaluate user experience across different interaction techniques?

• RQ3: What user experience challenges are reported for different AR interaction techniques?

• RQ4: What future directions and emerging trends are identified in research on AR interaction techniques?

2.2 Search strategy

The search strategy was designed to explore the search space properly and identify the relevant material with a thorough evaluation process. Current academic and industrial peer-reviewed journal literature covering the intersection of interaction design and UX in AR was considered the key source for this systematic review. The bibliographic search was carried out during the exploration phase using Compendex, Web of Science, ScienceDirect, ACM Digital, IEEE, and Springer Computer Science. To meet the eligibility criteria surrounding the search space, articles must have been published in peer-reviewed journals with the following keyword combinations in the title, keywords, or abstract: (“augmented reality” OR “mixed reality” OR “extended reality”) AND (“user experience” OR usability OR “user interface” OR “user centered” OR “user-centered”) AND (“interaction” OR “interaction type”) AND (assessment OR measurement OR evaluation). These criteria narrowed the focus of the review to identify the literature addressing the research questions.
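As a rough illustration only, the keyword combination above can be assembled into a single boolean query string; the exact field tags, quoting, and wildcard syntax differ across the six databases actually used, so this sketch shows the logical structure rather than any one database's syntax.

```python
# Sketch of the boolean query structure used in the search strategy.
# The group contents are taken from the review's stated keyword combinations;
# the assembly into one string is illustrative, not database-specific syntax.

def build_query() -> str:
    groups = [
        ['"augmented reality"', '"mixed reality"', '"extended reality"'],
        ['"user experience"', 'usability', '"user interface"',
         '"user centered"', '"user-centered"'],
        ['"interaction"', '"interaction type"'],
        ['assessment', 'measurement', 'evaluation'],
    ]
    # OR together the alternatives within each group, then AND the groups.
    return " AND ".join("(" + " OR ".join(g) + ")" for g in groups)

print(build_query())
```

In practice each database's advanced-search form applies this same AND-of-ORs structure to the title, keyword, and abstract fields.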

2.3 Eligibility criteria

Published original articles with the following features were included in the current study: (a) be published in a peer-reviewed journal between 2013 and 2024; (b) be applied to AR or MR systems; (c) identify, describe, or use empirical methods to quantify and/or compare UXs; (d) describe in reasonable detail one or more interaction types with mobile or HWD AR; (e) be written in English. Other exclusion criteria were: (a) book chapters; (b) papers that, upon review, were not related to the research questions; (c) conference papers; (d) opinions, websites, and editorials; (e) previous systematic literature reviews. The authors (CLH and WK) independently inspected the titles and abstracts to find the relevant papers based on the inclusion and exclusion criteria and resolved any discrepancies through discussion.

2.4 Literature search

PRISMA guidelines (Liberati et al., 2009) were followed for this systematic literature review. A summary of the identification, inspection, and selection of studies for inclusion in this review is presented in Figure 1. In the first step, 485 papers were identified. Next, 345 papers remained after removing duplicates. In the third step, relevant scientific articles were selected from the remaining 345 papers using a formal abstract screening process that applied the predetermined inclusion and exclusion criteria described in Section 2.3. Applying these criteria at this step yielded 198 eligible articles (roughly 57% of the records screened after duplicate removal). In the fourth step, the full texts of these 198 articles were studied to confirm that they met the same criteria. After the fourth step, 86 publications remained for review.
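The screening funnel reported above can be checked arithmetically; this short sketch uses only the counts stated in the text and in Figure 1 (485 identified, 140 duplicates, 147 excluded at screening, 86 included), with the full-text exclusion count derived by subtraction.

```python
# Consistency check of the PRISMA screening funnel.
identified = 485
duplicates_removed = 140
screened = identified - duplicates_removed               # records screened
excluded_at_screening = 147
full_text_assessed = screened - excluded_at_screening    # reports assessed
included = 86
excluded_at_full_text = full_text_assessed - included    # derived, not reported

assert screened == 345
assert full_text_assessed == 198
# Share of screened records surviving abstract screening (~57%).
print(round(100 * full_text_assessed / screened, 1))
```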

Figure 1
Flowchart for identifying new studies via databases and registers. Initially, 485 records were identified, with 140 removed as duplicates. After screening 345 records, 147 were excluded. Out of 198 assessed reports, multiple were excluded for various reasons, resulting in 86 studies included in review.

Figure 1. Flow diagram of PRISMA methodology and selection processes used in this review (Liberati et al., 2009). Note: Image generated using PRISMA 2020 generator (Haddaway et al., 2022).

3 Results

As shown in Table 1, this systematic search identified 86 published peer-reviewed journal articles that assessed the UX within the context of AR systems. During the selection phase, it was noticed that while numerous academic and industrial papers reference both AR and UX, relatively few provided a sufficient description of the AR interactions to be evaluated in this review. Interestingly, a larger-than-expected number of papers (43) evaluated an AR system that used neither a HWD nor a mobile platform. Most of these papers utilized some type of external cameras and visualizations to create an augmented experience. For the 86 papers that were reviewed, the authors organized them according to their context of AR use, device type, interaction design methods, and UX assessment methods, and evaluated the results.

Table 1

Table 1. Included papers.

3.1 Publishing trends

The first evaluation was to identify publishing trends by categorizing papers according to publishing year. Of the 86 papers identified for this systematic literature review, 2 were published in 2013, 2 in 2014, 0 in 2015, 4 in 2016, 2 in 2017, 3 in 2018, 8 in 2019, 3 in 2020, 11 in 2021, 13 in 2022, 18 in 2023 and 20 in 2024 (see Figure 2).
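The yearly counts above and the cumulative trajectory shown in Figure 2 can be reproduced directly; this sketch lists the per-year counts as stated and computes the running total.

```python
# Yearly inclusion counts (2013-2024) as reported above, plus the cumulative
# sum that the line in Figure 2 depicts.
from itertools import accumulate

years = list(range(2013, 2025))
counts = [2, 2, 0, 4, 2, 3, 8, 3, 11, 13, 18, 20]
cumulative = list(accumulate(counts))

assert sum(counts) == 86  # matches the 86 included papers
print(dict(zip(years, cumulative)))
```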

Figure 2
Bar and line graph illustrating the number of articles from 2013 to 2024. Bars represent yearly articles, showing an increase over time, especially from 2020 onward. The orange line depicts the cumulative total, also rising steadily and sharply increasing from 2020 to 2024.

Figure 2. Included papers per year.

This publication trend demonstrates a clear evolution in scholarly interest over time. As illustrated in Figure 2, the early years of research in this domain, specifically from 2013 to 2015, were characterized by a relatively low publication volume, averaging only one to two papers annually. This period reflects the early stages of AR technology, during which applications were limited, and the integration of UX considerations was still emerging. The modest increase in publications from 2016 through 2018, including a brief peak in 2016, suggests growing but still inconsistent academic engagement with the topic, possibly due to evolving technical capabilities and fluctuating access to development platforms.

A notable inflection point occurred in 2019, marking the beginning of a substantial acceleration in research output. From that year forward, the number of articles published annually rose sharply, with a particularly steep increase observed between 2021 and 2024. The year 2024 alone saw 20 publications, the highest within the study period, even though this literature search was conducted in mid-2024 and not all articles may have been published by that time. This surge aligns with broader technological advancements, such as the mainstream availability of AR development toolkits (e.g., Apple’s ARKit and Google’s ARCore) and the release of commercially viable AR headsets (e.g., Microsoft HoloLens and Magic Leap). These developments likely lowered the barrier to entry for conducting rigorous user-centered AR research, resulting in an influx of studies exploring various aspects of user interaction and design usability.

The cumulative publication trajectory further underscores the growing momentum in the field. While cumulative growth was gradual through 2018, it accelerated markedly from 2019 onward, resulting in a steep curve that reaches 86 total publications by 2024. This pattern suggests that AR UX has transitioned from a niche area into a mature and widely explored research focus. The growth is likely fueled not only by technological improvements but also by increased interdisciplinary interest, with contributions from human-computer interaction, psychology, cognitive science, and industrial design scholars.

Taken together, these trends reflect the maturation of the field and an increasing recognition of the importance of effective interaction design in the development and deployment of AR technologies. The exponential rise in scholarly output over the past 5 years points to a sustained and growing commitment to enhancing UX in AR systems. As AR applications continue to expand across sectors such as healthcare, education, and manufacturing, it is expected that research into user interaction and design best practices will remain a vital and expanding area of inquiry.

3.2 Context of AR use

Following the evaluation by publication year, the papers in this review were categorized and evaluated based on the context in which the AR systems were used. Figure 3 shows the breakdown by context of use.

Figure 3
Bar chart showing the number of articles in various contexts of augmented reality use; the highest count is for articles with no specific context of use identified.

Figure 3. Number of articles by context of AR use.

The distribution of contexts in which AR was applied reveals several important patterns regarding the focus of research and development in this domain. The most interesting finding was that in 25.6% of articles (n = 22), no specific context could be identified. This suggests a substantial number of studies were either exploratory in nature or aimed at developing generalized frameworks, tools, or usability evaluations not tied to a particular application domain. This prevalence indicates that foundational research in AR interaction design and UX often occurs outside of clearly defined use cases, possibly to maintain flexibility across multiple sectors.

Among domain-specific applications, the medical field stands out prominently, with 12.8% of articles (n = 11) dedicated to its use, making it the most common specific context of AR deployment. This finding reflects the increasing integration of AR into healthcare for purposes such as surgery assistance, medical training, and patient education. Following this, human-robot collaboration was addressed in six studies, underscoring the role of AR in facilitating intuitive and efficient interactions between humans and autonomous systems, an area of growing relevance in smart manufacturing and robotics.

Other notable areas include museum exploration, manufacturing, and education, each with five publications. These domains likely benefit from AR’s capacity to provide layered, interactive information that enhances learning and operational effectiveness. The presence of tourism (n = 4), training and social interaction (n = 3 each), and games and data visualization (n = 2 each) demonstrates the broad interest in AR’s potential to enrich engagement, understanding, and collaboration across both recreational and professional contexts. Most of the remaining categories, including aerospace, real-estate marketing, sports, and crime scene investigation, had one publication each. This wide but shallow representation indicates that while AR is being considered across a diverse array of settings, deeper investigation and broader deployment remain limited to a smaller subset of application areas.

Overall, the context of use investigation highlights a dual trend: a strong foundational interest in generalizable AR design principles, and a targeted focus on domains like healthcare and robotics where AR’s practical benefits are increasingly being realized. As technology matures and more specialized use cases emerge, it is likely that the distribution of AR research will become more evenly spread across domains, with further growth in contexts currently underrepresented.

3.3 Device evaluation

The analysis of device types used in studies on augmented reality (AR) interaction design and user experience reveals insightful trends in technological preferences and research contexts. As shown in Figure 4, the majority of studies (51%) employed head-worn displays (HWDs), followed closely by mobile devices (44%), while a smaller portion (5%) utilized both HWDs and mobile platforms. This distribution highlights a balanced interest in both immersive and accessible AR solutions, with a slight emphasis on more immersive HWD systems.

Figure 4
Pie chart illustrating device usage distribution. HWD constitutes fifty-one percent, painted in blue. Mobile accounts for forty-four percent, in orange. HWD and Mobile share five percent, in green.

Figure 4. Articles by devices evaluated.

Within the HWD category, the Microsoft HoloLens 2 emerged as the most used device, featured in 23 studies. This prevalence is indicative of HoloLens 2’s technical maturity, robust tracking capabilities, and growing popularity in both academic and industry settings. It was followed by the first-generation HoloLens (n = 12), suggesting that prior to newer models being available, the first-generation device served as a relevant research tool. Other HWDs such as the Varjo XR-3, Epson BT-200 OST-HMD, and Oculus Rift appeared infrequently, showing a more limited footprint in the literature. This disparity points to a consolidation around the HoloLens product line for head-worn AR research, likely due to its standalone capabilities and well-supported development environment.

In contrast, the mobile category showcased a different pattern. Android devices were used in 14 studies, significantly outpacing iOS devices (n = 5) and those that employed both platforms (n = 3). A notable portion of studies (n = 16) in this category did not specify the exact operating system or device used, which may reflect either platform-agnostic design or insufficient reporting. The dominance of Android may be attributed to its wider variety of hardware options and more open development framework, which can be advantageous in experimental settings.

The smallest category, studies using both HWD and mobile platforms, was represented by just a few instances. For example, some studies employed combinations such as HoloLens 2 with Android, or with both Android and iOS, reflecting efforts to compare or integrate different AR modalities. However, the low frequency of these hybrid configurations suggests that multi-platform AR deployments remain a niche area of research.

Overall, the data suggest that while both HWD and mobile platforms play substantial roles in AR UX research, there is a growing preference for HWDs, particularly the HoloLens 2, as researchers seek more immersive and spatially rich interactions. The sustained presence of mobile platforms underscores the importance of accessibility and real-world deployment, particularly in applications where portability and ease of use are critical. The limited adoption of dual-platform approaches indicates a potential area for future exploration, especially as cross-device AR experiences become more technically feasible.

3.4 Evaluation of interaction methods

Of the 86 articles reviewed, 77 evaluated the UX of a particular AR system. These 77 systems were evaluated using the taxonomy in Table 2 to provide insight into how different interaction methods support UX. Findings suggest that natural interactions offer important advantages over traditional WIMP interfaces. As shown in Figure 5, while the majority of systems (n = 41) utilized a combination of natural and WIMP-based interactions, those employing natural interactions exclusively (n = 17) demonstrated broader support for a range of user-centered features than systems relying solely on WIMPs (n = 20). This pattern indicates a growing recognition of the potential for natural interactions to enhance usability, engagement, and realism in AR environments.

Table 2

Table 2. Taxonomy for evaluation of interaction methods.

Figure 5
Bar chart comparing the number of systems evaluated for interactions: Combination (approximately 40 systems), WIMPs (around 20 systems), and Natural (approximately 15 systems).

Figure 5. Interaction methods evaluated.

When examining interaction characteristics, natural systems consistently supported a wider and more meaningful range of experiential dimensions than WIMP-only systems, suggesting that natural interactions offer superior alignment with the embodied and spatial nature of AR environments. Systems using natural interactions were particularly strong in supporting context-aware interactions (n = 12), direct manipulation (n = 10), and embodiment (n = 8). These features are central to fostering intuitive, real-world-like UXs. Natural interfaces also showed higher support for immersion, presence, and sensorial richness, all of which are essential for sustained engagement and effective task performance in AR. Notably, natural systems were the only category that incorporated conversational interactions, highlighting their ability to accommodate fluid, speech-based input modalities that enhance accessibility and user satisfaction.

In contrast, WIMP-only systems, though historically foundational in user interface design, demonstrated clear limitations within AR contexts. Their support for key experiential features such as immersion (n = 2), presence (n = 2), and identity construction (n = 1) was sparse. Most WIMP systems favored structured, static forms of interaction, with minimal capacity for adaptability or mindfulness. Their lack of support for adaptive, mindful, and conversational interactions (all with frequencies of zero) suggests that WIMPs are ill-suited for dynamic, embodied AR environments where responsiveness and human-centered flexibility are paramount.

Although combination systems outperformed both individual types in terms of evaluated features, it is important to recognize that their success is largely driven by the incorporation of natural interaction techniques. The benefits observed in hybrid systems, such as support for embodiment (n = 13), presence (n = 11), and context awareness (n = 15), reinforce the essential contributions of natural input modalities. These findings strongly suggest that the strengths of combination systems derive not from the WIMP components, but from the augmentation of those elements with naturalistic gestures, speech, and environmental responsiveness.

In addition to the 77 articles that focused on evaluating a specific AR system, a further 9 articles conducted direct comparisons of interaction methods, offering valuable insights into how different interface designs influence UX and task performance. These comparative studies represent an important contribution to the field, as they go beyond isolated system assessments to explore relative strengths and weaknesses across interaction paradigms. By examining variables such as usability, user satisfaction, task efficiency, and cognitive load, these studies provide a more nuanced understanding of how interaction styles perform under similar conditions. Although smaller in number, these articles help fill a critical gap in the literature by empirically testing assumptions about interface effectiveness and supporting evidence-based recommendations for AR design. Their findings are particularly useful for guiding future development of AR applications that aim to optimize user engagement, accessibility, and contextual responsiveness. This aligns with Picardi and Caruso (2024), who highlight in their review that although hundreds of AR studies exist, formal, user-centered evaluations are inconsistently applied and lack standardized methodologies.

Overall, the findings support the view that natural interactions offer notable advantages in AR system design, particularly in supporting rich, context-sensitive, and user-centered experiences. While WIMPs remain useful in certain contexts, the affordances of natural interactions suggest they are well-suited to the unique demands of AR environments. As AR technology continues to mature, further research and development in this area may help refine best practices for leveraging natural input in a wide range of applications.

3.5 Interaction types implemented

Building on the previous analysis of interaction methods in AR systems, the distribution of interaction types implemented across the reviewed literature provides additional support for the growing prominence and versatility of natural interactions. As shown in Figure 6, gesture-based interactions were the most frequently implemented, utilized by over 40 systems, closely followed by touch interactions. These findings align with the earlier conclusion that natural interaction modalities are not only conceptually favored but are also being put into practice across a substantial share of AR systems.


Figure 6. Interaction types used across systems.

Other high-frequency interaction types include voice commands, direct manipulation, and gaze-based input, all of which are hallmarks of natural interfaces. These modalities enable more immersive, fluid, and contextually responsive experiences by minimizing abstraction layers between the user and the digital content. For example, voice commands allow users to interact hands-free, while gaze tracking supports attentional alignment and context awareness, both of which are critical for maintaining a seamless UX in spatial environments. The substantial presence of these modalities reinforces the argument that natural interactions are increasingly utilized in AR interaction design.

In contrast, more traditional or constrained interaction methods, such as controller input, button presses, and slider or tilt mechanisms, appear less frequently. These are typically associated with WIMP or hybrid systems and often lack the immersive qualities that define effective AR engagement. Furthermore, the low implementation rates for modalities such as haptic feedback, physical markers, and brain-computer interfaces reflect either current technical limitations or the early-stage exploration of these approaches. Their rarity in the dataset suggests that, while innovative, such methods are not yet mainstream in AR research or application.

Interestingly, several systems incorporated multiple interaction types, a pattern that mirrors the earlier finding that combination interaction methods (natural + WIMPs) were most prevalent overall. However, the consistent dominance of natural modalities across this distribution highlights their foundational role in AR system design. Even in hybrid systems, it is often the natural components, such as gestures, voice, or gaze, that drive the core interaction experience.

3.6 UX evaluation methods and aspects of UX studied

An analysis of the UX evaluation methods and UX aspects studied in the AR literature reveals a strong reliance on subjective self-report measures, with limited incorporation of objective or physiological assessment tools. As illustrated in Figure 7, questionnaires were the dominant UX evaluation method, used in the overwhelming majority of studies (n = 85). This trend reflects the common preference for standardized, scalable tools such as the System Usability Scale (SUS), NASA-TLX, and custom surveys to assess user perceptions of usability, workload, and satisfaction. Following questionnaires, task performance was the next most frequently used method, appearing in 47 studies. This suggests that many researchers aimed to pair subjective ratings with performance-based metrics to evaluate the functional effectiveness of AR systems.
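
For readers less familiar with how the SUS produces its 0–100 score, the standard scoring rule can be sketched as follows. This is an illustrative sketch of the conventional scoring procedure, not a reproduction of any reviewed study's analysis; the response vector is hypothetical.

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5.

    Odd-numbered items (1, 3, 5, 7, 9) contribute (response - 1);
    even-numbered items (2, 4, 6, 8, 10) contribute (5 - response).
    The raw sum (0-40) is scaled by 2.5 to a 0-100 range.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)
              for i, r in enumerate(responses))  # i is 0-based
    return raw * 2.5

# One hypothetical respondent's answers to items 1-10
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Because the scale alternates positively and negatively worded items, raw item averages are not interpretable on their own, which is one reason standardized instruments like SUS are preferred over ad hoc surveys.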


Figure 7. UX evaluation methods used.

In contrast, more qualitative and observational methods were used much less frequently. Interviews, qualitative feedback, and general observation appeared in fewer than 10 studies each, suggesting that in-depth, open-ended data collection remains underutilized in the field. Similarly, methods such as eye tracking, think-aloud protocols, and behavioral observation, which can yield rich insights into cognitive load, attention, and interaction behavior, were rarely implemented. The same holds true for physiological and system-level measures such as EEG, system accuracy, and pre-post testing, each of which was used in only one or two studies. These findings suggest that while AR UX research often seeks to quantify user attitudes and task outcomes, it has not yet fully embraced the range of tools available for evaluating the depth and nuance of user experience.

Figure 8, which categorizes the specific UX aspects studied, further highlights the focus of current evaluation practices. Usability (n = 63) was by far the most commonly assessed construct, followed by cognitive load (n = 24), overall experience (n = 14), efficiency (n = 14), and engagement (n = 10). This emphasis reflects a foundational concern in AR research with ensuring that systems are intuitive, minimally taxing, and well-received by users. Secondary aspects such as acceptance (n = 9), learnability (n = 7), task accuracy (n = 5), and satisfaction (n = 5) also received attention, reinforcing the prioritization of pragmatic and experiential usability goals in AR design.


Figure 8. Aspects of UX studied.

However, many other dimensions of UX were evaluated far less frequently. Aspects such as immersion, perception, fatigue, and cybersickness, which are often central to immersive technology experiences, were studied in only a handful of articles (n = 3 each). Similarly, socially oriented UX factors like spatial interaction, social interaction, collaboration, and trust received very limited attention. The same was true for aesthetic and affective qualities such as visual performance, visual aesthetics, attractiveness, and comfort. These gaps suggest that while foundational UX concepts are well-represented in AR research, many of the experiential, social, and emotional aspects of UX remain underexplored.

Overall, the data show that AR UX research has thus far relied heavily on questionnaires and task-based evaluations, with strong emphasis on usability and cognitive load. While these measures are essential for establishing baseline system quality, there is substantial opportunity to expand the methodological toolkit and explore a broader spectrum of UX dimensions. Future research would benefit from incorporating more mixed-method approaches, including physiological, behavioral, and qualitative evaluations, to fully capture the richness and complexity of AR user experiences.

3.7 Quantitative comparison of usability across interaction paradigms

A quantitative synthesis was conducted using SUS scores reported across the included studies. Studies were categorized by interaction paradigm (WIMP-based, natural, or combination systems), and weighted means were calculated according to sample size. Pooled standard deviations were derived to account for within-category variability. Results across the 22 studies that reported SUS scores indicate that natural interaction systems achieved the highest average usability ratings (M = 76.56, SD = 12.33, k = 5), followed by WIMP-based systems (M = 73.70, SD = 13.09, k = 7), while combination systems yielded lower and more variable usability scores (M = 70.89, SD = 14.34, k = 10). The greater dispersion among combination systems suggests inconsistency in UX across hybrid designs, likely reflecting their diverse implementation contexts and interface configurations. Overall, this trend supports the observation that natural interaction paradigms, such as gesture-, voice-, and movement-based controls, tend to produce higher perceived usability in AR applications than WIMP-based or hybrid approaches.
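
The synthesis procedure described above (sample-size-weighted means with pooled standard deviations) can be sketched as follows. The per-study values here are hypothetical and purely illustrative, not the actual data extracted in this review.

```python
from math import sqrt

def weighted_mean(studies):
    """Sample-size-weighted mean of per-study means.

    Each study is a (mean, sd, n) tuple.
    """
    total_n = sum(n for _, _, n in studies)
    return sum(n * m for m, _, n in studies) / total_n

def pooled_sd(studies):
    """Pooled standard deviation capturing within-study variability.

    Uses the standard (n_i - 1)-weighted combination of variances.
    """
    num = sum((n - 1) * s ** 2 for _, s, n in studies)
    den = sum(n - 1 for _, _, n in studies)
    return sqrt(num / den)

# Hypothetical (mean SUS, SD, sample size) tuples for one paradigm category
natural = [(78.0, 11.0, 20), (75.0, 13.5, 30)]
print(round(weighted_mean(natural), 2))  # 76.2
print(round(pooled_sd(natural), 2))      # 12.57
```

Note that the pooled SD reflects only within-study dispersion; between-study heterogeneity, which the review flags as substantial for combination systems, would require a random-effects model to quantify formally.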

Although several studies reported workload or performance-related outcomes, the data were insufficient to support a meaningful quantitative synthesis. Only a small subset of papers employed standardized workload assessments such as the NASA-TLX, and reporting practices varied widely across studies. In most cases, subscale scores were presented without corresponding standard deviations or sample sizes, preventing reliable aggregation. Other workload or performance measures such as task completion time and error rate were also reported too inconsistently for quantitative comparison.

4 Discussion

In the field of AR, foundational research plays a significant role, with the largest percentage of studies focusing on developing generalizable frameworks and usability tools rather than being tied to specific domains (Ballestin et al., 2021; Börsting et al., 2022; Caputo et al., 2024; Han et al., 2023; Hussain et al., 2023; Khurana et al., 2023; M. Kim and Lee, 2016; Knierim et al., 2021; Kyaw et al., 2023; Lima and Hwang, 2024; J. Y. Oh et al., 2019; Özacar et al., 2016; Park and Moon, 2013; Pfeuffer et al., 2021; Ren et al., 2024; Ro et al., 2019; Shen et al., 2022; Shi et al., 2023; Tang et al., 2022; Yin et al., 2019; Zhang X et al., 2024; Zhou et al., 2023). This exploratory approach underscores the emphasis on creating adaptable solutions applicable across various contexts. Conversely, application-specific research has predominantly centered on the medical field (Cidota et al., 2016; Condino et al., 2018; De Marsico et al., 2014; De Paolis et al., 2022; J. C. Kim et al., 2023; Laine and Suk, 2016; Lam et al., 2021; Mittmann et al., 2022; Moglia et al., 2024; Negrão and Maciel, 2024; Sun et al., 2024). Other notable domains include human-robot collaboration (Chan et al., 2022; Chu and Liu, 2023; Frank et al., 2017; Hietanen et al., 2020; Sprute et al., 2019; Widiyanti et al., 2024), manufacturing (Cui et al., 2024; Feng et al., 2023; Grodotzki et al., 2023; Sanna et al., 2022; Xue et al., 2024), museum exploration (W. Chen et al., 2021; Gan et al., 2023; Jin et al., 2023; Neamțu et al., 2024; Yi and Kim, 2021), and education (S. Oh et al., 2018; Rebollo et al., 2022; Rossano et al., 2020; Sorrentino and Spano, 2019; Villanueva et al., 2022), illustrating AR’s capacity to enhance precision, training efficiency, and interactive learning experiences. 
Despite the diversity of settings explored, such as tourism (Koo et al., 2020; Matviienko et al., 2022; Shih et al., 2019; Silva et al., 2023), sports (Tesařová et al., 2023), and real estate (Macedo et al., 2014), many of these domains are represented by only one or a handful of studies, indicating initial interest but limited deployment beyond healthcare and robotics. This trend highlights the broad yet shallow spread of AR research across industries, suggesting a need for deeper, domain-specific investigations to fully harness the potential of AR technologies.

4.1 Natural vs. WIMP interactions

Natural interaction modalities, such as gestures, voice commands, and gaze-based controls, have demonstrated considerable potential in enhancing UX within AR systems. As discussed earlier, these modalities offer significant advantages over traditional WIMP interfaces by facilitating context-aware and immersive experiences. Notably, hybrid systems combining natural and WIMP interactions have emerged as the most prevalent approach as seen in (Albeedan et al., 2024; Caputo et al., 2024; Cejka et al., 2021; Chen et al., 2024; Chen et al., 2021; Cidota et al., 2016; Condino et al., 2018; Cui et al., 2024; Gan et al., 2023; Han et al., 2023; Ismael et al., 2024; Joseph Dube and İnce, 2019; Khurana et al., 2023; J. C. Kim et al., 2024; Knierim et al., 2021; Kyaw et al., 2023; Laine and Suk, 2016; W.-C. Li et al., 2022; Mahroo et al., 2023; Mendoza et al., 2021; Miyazaki and Komuro, 2021; Neamțu et al., 2024; Oh et al., 2019; Oh et al., 2018; Park and Moon, 2013; Pfeuffer et al., 2021; Rebollo et al., 2022; Ren et al., 2024; Ro et al., 2019; Rossano et al., 2020; Sanna et al., 2022; Sekhavat, 2016; Shafana and Silpasuwanchai, 2023; Silva et al., 2023; Sprute et al., 2019; Vargas González et al., 2023; Wild et al., 2021; Xue et al., 2024; Yin et al., 2019; Zhao et al., 2023). Utilizing a combination of natural interfaces with the more familiar WIMP-based designs may be an effective way to maximize usability and engagement. Dünser and Billinghurst (2011) noted that because many AR systems are evaluated as one-off prototypes without shared frameworks, usability comparisons are difficult, which underscores the value of consistent hybrid interface strategies. This may indicate that the integration of natural interaction elements in AR systems will be pivotal in addressing the dynamic demands of AR contexts. Despite the promising advancements in natural interaction technologies, certain modalities remain underexplored.
Haptics (Villanueva et al., 2022), physical markers (Laine and Suk, 2016), and brain-computer interfaces (Sanna et al., 2022), though observed in this review, have yet to see widespread application, possibly due to technical challenges or adoption barriers. Furthermore, systems relying exclusively on WIMP interactions such as (Blattgerste et al., 2021; Campos-López et al., 2023; De Paolis et al., 2022; Helin et al., 2018; Hietanen et al., 2020; J. C. Kim et al., 2023; Koo et al., 2020; Lam et al., 2021; Lima and Hwang, 2024; Macedo et al., 2014; Matviienko et al., 2022; Miranda et al., 2022; Mittmann et al., 2022; Moglia et al., 2024; Negrão and Maciel, 2024; Shen et al., 2022; Shi et al., 2023; Sorrentino and Spano, 2019; Tesařová et al., 2023; Widiyanti et al., 2024) lack the adaptability required for fully immersive AR experiences, highlighting the limitations of conventional interface designs when used alone. This aligns with findings by Dünser and Billinghurst (2011), who emphasize that WIMP-inspired heuristics inadequately capture the needs of AR systems, especially in supporting 3D object manipulation, multimodal feedback, and spatial navigation. Comparative studies such as (Hussain et al., 2023; Özacar et al., 2016; Sun et al., 2024; Swaminathan et al., 2013; Yoon et al., 2018; Zhang J et al., 2024; Zhang X et al., 2024; Zhang et al., 2023; Zhou et al., 2023) have provided valuable insights by directly evaluating interaction methods, thereby guiding future AR development with evidence-based recommendations. Moving forward, AR research must continue to explore innovative and underutilized modalities while leveraging the strengths of both natural and hybrid systems. By expanding the scope of comparative studies and addressing existing barriers to adoption, AR designers and researchers can enhance the depth and richness of user interactions, ultimately contributing to more intuitive and effective AR environments.

Although this review highlights the growing preference for natural interactions in AR, the findings also suggest that WIMP components continue to play a meaningful role in hybrid designs despite their drawbacks. These elements can provide structured interaction logic and precise control mechanisms that are sometimes difficult to achieve with gesture or voice input alone. In complex, data-intensive, or collaborative applications, such as engineering visualization, air traffic control, or medical imaging, WIMP elements may support increased accuracy, repeatability, and error recovery. Moreover, the familiarity of WIMP interfaces can help bridge usability gaps for novice users or non-expert populations, facilitating smoother onboarding into AR environments. As reported by studies such as Chen et al. (2024) and Negrão and Maciel (2024), mixed designs combining 2D control panels with spatialized feedback can improve task efficiency without undermining immersion.

4.2 UX evaluation in AR

The current state of UX evaluation in AR systems is characterized by the dominance of self-report methods, with questionnaires such as the System Usability Scale (SUS) (Blattgerste et al., 2021; Börsting et al., 2022; Campos-López et al., 2023; Caputo et al., 2024; Chan et al., 2022; Chen et al., 2021; Chu and Liu, 2023; Cidota et al., 2016; Feng et al., 2023; García-Pereira et al., 2020; Han et al., 2023; Helin et al., 2018; Lam et al., 2021; Madeira et al., 2022; Mahroo et al., 2023; Neamțu et al., 2024; Ro et al., 2019; Sanna et al., 2022; Shi et al., 2023; Silva et al., 2023; Vargas González et al., 2023; Widiyanti et al., 2024; Wild et al., 2021; Xue et al., 2024; Zhang X et al., 2024; Zhang et al., 2023) and NASA-TLX (Cejka et al., 2021; Chan et al., 2022; K. Chen et al., 2024; Feng et al., 2023; Frank et al., 2017; Knierim et al., 2021; W.-C. Li et al., 2022; Miyazaki and Komuro, 2021; Negrão and Maciel, 2024; Özacar et al., 2016; Ren et al., 2024; Ro et al., 2019; Rossano et al., 2020; Sanna et al., 2022; Shen et al., 2022; Shi et al., 2023; Vargas González et al., 2023; Xue et al., 2024; Yoon et al., 2018; Zhang J et al., 2024; Zhang et al., 2023) being utilized in a significant majority of studies. These tools provide scalable and standardized means for assessing UX, yet there is a noticeable gap in the adoption of objective, behavioral, and physiological methods. Dhir and Al-Kahtani (2013) demonstrated the value of a multidimensional UX evaluation in AR by integrating self-report tools with affective and expectation-based methods during mobile AR prototype testing. Their triangulated approach provided deeper insights into user perceptions and design suitability across stages of interaction. Additionally, Gutierrez et al. (2022) proposed a structured framework of 94 quality attributes, organized into objective and subjective UX criteria, to improve consistency in AR evaluation across application domains. 
While techniques such as eye tracking (Chen et al., 2021; Chu and Liu, 2023), EEG (Sun et al., 2024), and system accuracy metrics (Condino et al., 2018) were seen in this review, they were comparatively rare, which indicates that the multidimensional analysis of UX within AR environments is still limited. Moreover, several studies created custom surveys to gather data but did not proceed to validate their reliability or effectiveness (Caputo et al., 2024; Chen et al., 2024; W. Chen et al., 2021; Chu and Liu, 2023; Condino et al., 2018; Cui et al., 2024; De Marsico et al., 2014; De Paolis et al., 2022; Frank et al., 2017; Gan et al., 2023; García-Pereira et al., 2020; Han et al., 2023; Hietanen et al., 2020; Hussain et al., 2023; Joseph Dube and İnce, 2019; Khurana et al., 2023; M. Kim and Lee, 2016; Knierim et al., 2021; Koo et al., 2020; Kyaw et al., 2023; Laine and Suk, 2016; Macedo et al., 2014; Matviienko et al., 2022; Miranda et al., 2022; Moglia et al., 2024; Negrão and Maciel, 2024; Oh et al., 2018; Park and Moon, 2013; Pfeuffer et al., 2021; Rebollo et al., 2022; Shafana and Silpasuwanchai, 2023; Shen et al., 2022; Silva et al., 2023; Sorrentino and Spano, 2019; Sprute et al., 2019; Tang et al., 2022; Vargas González et al., 2023; Villanueva et al., 2022; Yi and Kim, 2021; Yin et al., 2019; Zhao et al., 2023). This gap highlights a missed opportunity to enhance the robustness and applicability of these instruments in AR research.
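
As an illustration of the kind of internal-consistency check that such custom instruments typically lack, a minimal Cronbach's alpha computation might look like the sketch below. This is a generic illustration of the standard formula, not the method of any reviewed study, and the respondent-by-item response matrix is hypothetical.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    where k is the number of items.
    """
    k = len(item_scores[0])
    n = len(item_scores)
    assert n >= 2 and k >= 2

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in item_scores]) for j in range(k)]
    total_var = var([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical survey data: 4 respondents x 3 Likert items
data = [[4, 5, 4], [3, 3, 2], [5, 4, 5], [2, 2, 3]]
print(round(cronbach_alpha(data), 2))  # ~0.89; >= 0.7 is a common threshold
```

Reporting such a coefficient, alongside a brief rationale for item selection, would substantially strengthen the many custom questionnaires observed in this review at very low cost.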

In addition to methodological preferences, the focus of UX evaluation in AR tends to concentrate narrowly on core aspects such as usability and cognitive load, reflected by their prevalence in research studies (63 for usability and 24 for cognitive load). Other experiential dimensions, including immersion (Cejka et al., 2021; Jin et al., 2023; Luo et al., 2023), fatigue (Caputo et al., 2024; Shafana and Silpasuwanchai, 2023; Shi et al., 2023), aesthetics (Yi and Kim, 2021), social interaction (Gan et al., 2023), and comfort (Miyazaki and Komuro, 2021), remain underexplored. This narrow focus restricts the ability to fully comprehend the depth and richness of user experiences in augmented reality systems.

Future evaluations would benefit from mixed method approaches that integrate qualitative insights alongside behavioral and physiological measures. Such comprehensive strategies could provide a more nuanced understanding of UX, capturing the complex interplay of functionality, user satisfaction, and engagement in AR systems. By expanding evaluation methods and addressing gaps in experiential dimensions, researchers can enhance the practical and theoretical development of augmented reality technologies.

5 Limitations

This systematic review may not include all papers relevant to the intersection of interaction design and UX in AR systems. Although the authors used a robust set of keywords, some articles may not have been identified because the number of search terms was necessarily limited. Additionally, no citation chaining of the selected papers was conducted; citation chaining may have identified additional papers relevant to this review. Finally, a limited number of databases were searched, so the search could be expanded.

6 Future research

Future investigations into AR should delve deeper into interaction methods that have not yet been fully explored, such as haptics, physical markers, and brain-computer interfaces, to address existing limitations in natural interaction systems. Expanding the range of comparative studies would offer valuable insights into optimizing interaction techniques and improving UX. Additionally, research should aim to diversify UX assessment methodologies by incorporating objective measures like eye-tracking, EEG, and system performance metrics alongside traditional self-reported approaches. Greater attention to underrepresented aspects of UX, such as immersion, aesthetics, social dynamics, and physical comfort, could provide a more comprehensive understanding of AR applications. By tackling these gaps and employing mixed-method strategies, the development of more immersive, user-friendly, and innovative AR solutions can be achieved.

7 Practical recommendations for advancing beyond WIMP

This review highlights the need for AR systems to move beyond traditional WIMP paradigms toward interaction models that are inherently spatial, embodied, and context-aware. While WIMP interfaces offer familiarity and functional precision, their 2D symbolic logic constrains immersion and limits the adaptive potential of AR environments. The transition toward embodied and multimodal interaction represents a necessary step in realizing AR’s full experiential and functional capabilities. However, as AR technologies continue to mature, some limited integration of WIMP elements may remain useful as a transitional scaffold, particularly in applications where precision, learnability, or legacy system compatibility are critical.

• Designers should prioritize creating natively 3D interfaces that encourage intuitive behaviors such as gesture, gaze, movement, and tangible manipulation (Jiang et al., 2022; Jin et al., 2023). Rather than reproducing flat icons or menu-driven panels in virtual space, interface design should leverage spatial affordances and contextual cues that align with natural human perception and action. Incorporating multimodal feedback, such as auditory or haptic responses, can further enhance accessibility, presence, and inclusivity for diverse user populations (Jiboku and Obarayi, 2023; W.-C. Li et al., 2022).

• Developers should prioritize adaptive, context-sensitive architectures capable of dynamically adjusting to user position, task, and environment (Cao et al., 2021; Katić et al., 2013). Tangible interfaces and multimodal input pipelines should replace static graphical elements, allowing direct manipulation of virtual objects through embodied action (Ha and Woo, 2010).

These recommendations advocate a decisive transition from legacy graphical conventions toward embodied, adaptive, and inclusive interaction paradigms, thus designing AR systems that reflect how humans naturally perceive, move, and act within 3D worlds.

8 Conclusion

This systematic review highlights a maturing field of research focused on interaction design and UX in AR systems. The findings reveal a growing consensus around the importance of natural interaction modalities, such as gesture, voice, and gaze, that align more closely with the spatial and embodied nature of AR environments. Compared to traditional WIMP interfaces, natural interactions support a broader range of experiential features, including immersion, context-awareness, and intuitive control, which are essential for effective engagement and task performance in AR settings.

The dominance of hybrid systems, which combine natural and WIMP-based interfaces, suggests that integrating familiar design elements with more adaptive and human-centered modalities can enhance usability and lower barriers to adoption. Traditional WIMP elements may therefore retain some situational value, particularly for supporting precision, learnability, and fallback functionality in complex or constrained AR environments. However, the core benefits of these hybrid systems are largely driven by their natural interaction components. Despite this trend toward utilizing more natural elements, several advanced modalities including haptics, physical markers, and brain-computer interfaces remain underutilized, indicating opportunities for future research and technological development.

In terms of UX assessment, the literature remains heavily reliant on subjective measures such as questionnaires and task performance metrics, while objective and physiological methods are significantly underrepresented. Although tools like the SUS and NASA-TLX are widely used, there is a notable lack of validation for custom instruments and limited use of complementary methods such as eye tracking or EEG. Moreover, current evaluation practices tend to emphasize usability and cognitive load, often neglecting deeper experiential, social, and emotional dimensions like comfort, aesthetics, collaboration, and presence.

Device usage patterns also reflect the state of the field: HWDs, especially the Microsoft HoloLens 2, dominate immersive AR research, while mobile devices remain prevalent in studies prioritizing accessibility and portability. Only a small portion of studies explored cross-platform AR applications, suggesting an untapped area for further exploration as AR ecosystems evolve.

Taken together, these findings underscore the need for more comprehensive, mixed method approaches to UX evaluation and more deliberate integration of natural, adaptive interaction paradigms. Designers of future AR systems must address these gaps to better accommodate the complexities of real-world use, improve inclusivity and engagement, and support the creation of seamless, responsive, and human-centered digital experiences.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

CH: Conceptualization, Formal Analysis, Methodology, Visualization, Writing – original draft, Writing – review and editing. WK: Conceptualization, Methodology, Supervision, Writing – review and editing.

Funding

The authors declare that no financial support was received for the research and/or publication of this article.

Acknowledgments

The authors express their gratitude to the researchers and practitioners in the field of augmented reality whose work provided valuable insights and inspiration for this review. We also acknowledge the contributions of the institutions and funding bodies that supported the studies cited. Special thanks are extended to collaborators and colleagues for their constructive feedback and guidance throughout the development of this manuscript.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.

Generative AI statement

The authors declare that no Generative AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Albeedan, M., Kolivanda, H., and Hammady, R. (2024). Designing and evaluation of a mixed reality system for crime scene investigation training: a hybrid approach. Virtual Real. 28 (3), 127. doi:10.1007/s10055-024-01018-8

Ballestin, G., Chessa, M., and Solari, F. (2021). A registration framework for the comparison of video and optical see-through devices in interactive augmented reality. IEEE Access 9, 64828–64843. doi:10.1109/ACCESS.2021.3075780

Blattgerste, J., Luksch, K., Lewa, C., and Pfeiffer, T. (2021). TrainAR: a scalable interaction concept and didactic framework for procedural trainings using handheld augmented reality. Multimodal Technol. Interact. 5 (7), 30. doi:10.3390/mti5070030

Börsting, I., Karabulut, C., Fischer, B., and Gruhn, V. (2022). Design patterns for mobile augmented reality user interfaces—an incremental review. Information 13 (4), 159. doi:10.3390/info13040159

Campos-López, R., Guerra, E., De Lara, J., Colantoni, A., and Garmendia, A. (2023). Model-driven engineering for augmented reality. J. Object Technol. 22 (2), 2–1. doi:10.5381/jot.2023.22.2.a7

Cao, J., Liu, X., Su, X., Tarkoma, S., and Hui, P. (2021). “Context-aware augmented reality with 5G edge,” in 2021 IEEE global communications conference (GLOBECOM), 1–6. doi:10.1109/GLOBECOM46510.2021.9685498

Caputo, A., Bartolomioli, R., Orso, V., Mingardi, M., Da Granaiola, L., Gamberini, L., et al. (2024). Comparison of deviceless methods for distant object manipulation in mixed reality. Comput. and Graph. 122, 103959. doi:10.1016/j.cag.2024.103959

Cejka, J., Mangeruga, M., Bruno, F., Skarlatos, D., and Liarokapis, F. (2021). Evaluating the potential of augmented reality interfaces for exploring underwater historical sites. IEEE Access 9, 45017–45031. doi:10.1109/ACCESS.2021.3059978

Chan, W. P., Hanks, G., Sakr, M., Zhang, H., Zuo, T., Van Der Loos, H. F. M., et al. (2022). Design and evaluation of an augmented reality head-mounted display interface for human robot teams collaborating in physically shared manufacturing tasks. ACM Trans. Human-Robot Interact. 11 (3), 1–19. doi:10.1145/3524082

Chen, W., Shan, Y., Wu, Y., Yan, Z., and Li, X. (2021). Design and evaluation of a distance-driven user interface for asynchronous collaborative exhibit browsing in an augmented reality museum. IEEE Access 9, 73948–73962. doi:10.1109/ACCESS.2021.3080286

Chen, K., Nadirsha, T. N. M., Lilith, N., Alam, S., and Svensson, Å. (2024). Tangible digital twin with shared visualization for collaborative air traffic management operations. Transp. Res. Part C Emerg. Technol. 161, 104546. doi:10.1016/j.trc.2024.104546

Chu, C.-H., and Liu, Y.-L. (2023). Augmented reality user interface design and experimental evaluation for human-robot collaborative assembly. J. Manuf. Syst. 68, 313–324. doi:10.1016/j.jmsy.2023.04.007

Cidota, M. A., Lukosch, S. G., Dezentje, P., Bank, P. J. M., Lukosch, H. K., and Clifford, R. M. S. (2016). Serious gaming in augmented reality using HMDs for assessment of upper extremity motor dysfunctions: user studies for engagement and usability. I-Com 15 (2), 155–169. doi:10.1515/icom-2016-0020

Condino, S., Turini, G., Parchi, P. D., Viglialoro, R. M., Piolanti, N., Gesi, M., et al. (2018). How to build a patient-specific hybrid simulator for orthopaedic open surgery: benefits and limits of mixed-reality using the microsoft HoloLens. J. Healthc. Eng. 2018, 1–12. doi:10.1155/2018/5435097

Cui, J., Lou, R., Mantelet, F., and Segonds, F. (2024). Integration of additive manufacturing and augmented reality in early design phases: a way to foster remote creativity. Int. J. Interact. Des. Manuf. (IJIDeM) 18 (2), 609–625. doi:10.1007/s12008-023-01629-6

De Marsico, M., Levialdi, S., Nappi, M., and Ricciardi, S. (2014). FIGI: floating interface for gesture-based interaction. J. Ambient Intell. Humaniz. Comput. 5 (4), 511–524. doi:10.1007/s12652-012-0160-9

De Paolis, L. T., Vite, S. T., Castañeda, M. Á. P., Domínguez Velasco, C. F., Muscatello, S., and Hernández Valencia, A. F. (2022). An augmented reality platform with hand gestures-based navigation for applications in image-guided surgery: prospective concept evaluation by surgeons. Int. J. Human–Computer Interact. 38 (2), 131–143. doi:10.1080/10447318.2021.1926116

Dhir, A., and Al-kahtani, M. (2013). A case study on user experience (UX) evaluation of mobile augmented reality prototypes.

Dünser, A., and Billinghurst, M. (2011). “Evaluating augmented reality systems,” in Handbook of augmented reality. Editor B. Furht (New York: Springer), 289–307. doi:10.1007/978-1-4614-0064-6_13

Feng, S., He, X., He, W., and Billinghurst, M. (2023). Can you hear it? Stereo sound-assisted guidance in augmented reality assembly. Virtual Real. 27 (2), 591–601. doi:10.1007/s10055-022-00680-0

Frank, J. A., Moorhead, M., and Kapila, V. (2017). Mobile mixed-reality interfaces that enhance human–robot interaction in shared spaces. Front. Robotics AI 4, 20. doi:10.3389/frobt.2017.00020

Gan, Q., Liu, Z., Liu, T., Zhao, Y., and Chai, Y. (2023). Design and user experience analysis of AR intelligent virtual agents on smartphones. Cognitive Syst. Res. 78, 33–47. doi:10.1016/j.cogsys.2022.11.007

García-Pereira, I., Portalés, C., Gimeno, J., and Casas, S. (2020). A collaborative augmented reality annotation tool for the inspection of prefabricated buildings. Multimedia Tools Appl. 79 (9–10), 6483–6501. doi:10.1007/s11042-019-08419-x

Grodotzki, J., Müller, B. T., and Tekkaya, A. E. (2023). Enhancing manufacturing education based on controller-free augmented reality learning. Manuf. Lett. 35, 1246–1254. doi:10.1016/j.mfglet.2023.08.068

Gutierrez, L. E., Betts, M. M., Wightman, P., Salazar, A., Jabba, D., and Nieto, W. (2022). Characterization of quality attributes to evaluate the user experience in augmented reality. IEEE Access 10, 112639–112656. doi:10.1109/ACCESS.2022.3216860

Ha, T., and Woo, W. (2010). “An empirical evaluation of virtual hand techniques for 3D object manipulation in a tangible augmented reality environment,” in 2010 IEEE symposium on 3D user interfaces (3DUI), 91–98. doi:10.1109/3DUI.2010.5444713

Haddaway, N. R., Page, M. J., Pritchard, C. C., and McGuinness, L. A. (2022). PRISMA2020: an R package and shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and open synthesis. Campbell Syst. Rev. 18 (2), e1230. doi:10.1002/cl2.1230

Han, V. Y., Cho, H., Maeda, K., Ion, A., and Lindlbauer, D. (2023). BlendMR: a computational method to create ambient mixed reality interfaces. Proc. ACM Human-Computer Interact. 7 (ISS), 217–241. doi:10.1145/3626472

Hassenzahl, M. (2010). Experience design: technology for all the right reasons. Springer International Publishing. doi:10.1007/978-3-031-02191-6

Helin, K., Kuula, T., Vizzi, C., Karjalainen, J., and Vovk, A. (2018). User experience of augmented reality system for astronaut’s manual work support. Front. Robotics AI 5, 106. doi:10.3389/frobt.2018.00106

Hietanen, A., Pieters, R., Lanz, M., Latokartano, J., and Kämäräinen, J.-K. (2020). AR-based interaction for human-robot collaborative manufacturing. Robotics Computer-Integrated Manuf. 63, 101891. doi:10.1016/j.rcim.2019.101891

Hussain, M., Park, J., and Kim, H. K. (2023). Effects of interaction method, size, and distance to object on augmented reality interfaces. Interact. Comput. 35 (1), 1–11. doi:10.1093/iwc/iwad034

Ismael, M., McCall, R., McGee, F., Belkacem, I., Stefas, M., Baixauli, J., et al. (2024). Acceptance of augmented reality for laboratory safety training: methodology and an evaluation study. Front. Virtual Real. 5, 1322543. doi:10.3389/frvir.2024.1322543

Jiang, Q., Chen, J., Wu, Y., Gu, C., and Sun, J. (2022). A study of factors influencing the continuance intention to the usage of augmented reality in museums. Systems 10 (3), 73. doi:10.3390/systems10030073

Jiboku, F., and Obarayi, Z. (2023). User experience and interaction design in augmented reality.

Jin, Y., Ma, M., and Zhu, Y. (2022). A comparison of natural user interface and graphical user interface for narrative in HMD-based augmented reality. Multimedia Tools Appl. 81 (4), 5795–5826. doi:10.1007/s11042-021-11723-0

Jin, Y., Ma, M., and Yun, L. (2023). A comparative study of HMD-based virtual and augmented realities for immersive museums: user acceptance, medium and learning. J. Comput. Cult. Herit., 3627164. doi:10.1145/3627164

Joseph Dube, T., and İnce, G. (2019). A novel interface for generating choreography based on augmented reality. Int. J. Human-Computer Stud. 132, 12–24. doi:10.1016/j.ijhcs.2019.07.005

Katić, D., Wekerle, A.-L., Görtler, J., Spengler, P., Bodenstedt, S., Röhl, S., et al. (2013). Context-aware augmented reality in laparoscopic surgery. Comput. Med. Imaging Graph. 37 (2), 174–182. doi:10.1016/j.compmedimag.2013.03.003

Khurana, A., Glueck, M., and Chilana, P. K. (2023). Do I just tap my headset? how novice users discover gestural interactions with consumer augmented reality applications. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7 (4), 1–28. doi:10.1145/3631451

Kim, M., and Lee, J. Y. (2016). Touch and hand gesture-based interactions for directly manipulating 3D virtual objects in mobile augmented reality. Multimedia Tools Appl. 75 (23), 16529–16550. doi:10.1007/s11042-016-3355-9

Kim, J. C., Saguna, S., and Åhlund, C. (2023). Acceptability of a health care app with 3 user interfaces for older adults and their caregivers: design and evaluation study. JMIR Hum. Factors 10, e42145. doi:10.2196/42145

Kim, J. C., Saguna, S., and Åhlund, C. (2024). The effects of augmented reality companion on user engagement in energy management mobile app. Appl. Sci. 14 (7), 2671. doi:10.3390/app14072671

Kiourexidou, M., Kanavos, A., Klouvidaki, M., and Antonopoulos, N. (2024). Exploring the role of user experience and interface design communication in augmented reality for education. Multimodal Technol. Interact. 8 (6), 43. doi:10.3390/mti8060043

Knierim, P., Hein, D., Schmidt, A., and Kosch, T. (2021). The SmARtphone controller: leveraging smartphones as input and output modality for improved interaction within mobile augmented reality environments. I-Com 20 (1), 49–61. doi:10.1515/icom-2021-0003

Kolla, S. S. V., and Plapper, P. (2023). “Interaction modalities for augmented reality applications in manufacturing,” in Industrial engineering and applications. Amsterdam, The Netherlands: IOS Press/Sage Publishing, 379–390. doi:10.3233/ATDE230063

Koo, S., Kim, J., Kim, C., Kim, J., and Cha, H. S. (2020). Development of an augmented reality tour guide for a cultural heritage site. J. Comput. Cult. Herit. 12 (4), 1–24. doi:10.1145/3317552

Kyaw, N., Gu, M., Croft, E., and Cosgun, A. (2023). Comparing usability of augmented reality and virtual reality for creating virtual bounding boxes of real objects. Appl. Sci. 13 (21), 11693. doi:10.3390/app132111693

Laine, T. H., and Suk, H. J. (2016). Designing mobile augmented reality exergames. Games Cult. 11 (5), 548–580. doi:10.1177/1555412015572006

Lam, M. C., Suwadi, N. A., Mohd Zainul Arifien, A. H., Poh, B. K., Safii, N. S., and Wong, J. E. (2021). An evaluation of a virtual atlas of portion sizes (VAPS) mobile augmented reality for portion size estimation. Virtual Real. 25 (3), 695–707. doi:10.1007/s10055-020-00484-0

Li, J. (2024). Beyond sight: enhancing augmented reality interactivity with audio-based and non-visual interfaces. Appl. Sci. 14 (11), 4881. doi:10.3390/app14114881

Li, W.-C., Zhang, J., Court, S., Kearney, P., and Braithwaite, G. (2022). The influence of augmented reality interaction design on pilot's perceived workload and situation awareness. Int. J. Industrial Ergonomics 92, 103382. doi:10.1016/j.ergon.2022.103382

Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P. A., et al. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 339, b2700. doi:10.1136/bmj.b2700

Lima, I. B., and Hwang, W. (2024). Effects of heuristic type, user interaction level, and evaluator’s characteristics on usability metrics of augmented reality (AR) user interfaces. Int. J. Human–Computer Interact. 40 (10), 2604–2621. doi:10.1080/10447318.2022.2163769

Luo, Y., Liu, F., She, Y., and Yang, B. (2023). A context-aware Mobile augmented reality pet interaction model to enhance user experience. Comput. Animat. Virtual Worlds 34 (1), e2123. doi:10.1002/cav.2123

Macedo, D. V. D., Rodrigues, M. A. F., Furtado, J. J. V. P., Furtado, E. S., and Chagas, D. A. (2014). Using and evaluating augmented reality for mobile data visualization in real estate classified ads. Int. J. Comput. Appl. 36 (1). doi:10.2316/Journal.202.2014.1.202-3737

Madeira, T., Marques, B., Neves, P., Dias, P., and Santos, B. S. (2022). Comparing desktop vs. mobile interaction for the creation of pervasive augmented reality experiences. J. Imaging 8 (3), 79. doi:10.3390/jimaging8030079

Mahroo, A., Greci, L., Mondellini, M., and Sacco, M. (2023). Assessment of a mixed reality smart home controller: holohome pilot study on healthy adults. Virtual Real. 27 (3), 2673–2690. doi:10.1007/s10055-023-00834-8

Matviienko, A., Günther, S., Ritzenhofen, S., and Mühlhäuser, M. (2022). AR sightseeing: comparing information placements at outdoor historical heritage sites using augmented reality. Proc. ACM Human-Computer Interact. 6 (MHCI), 1–17. doi:10.1145/3546729

Mendoza, S., Cortés-Dávalos, A., Sánchez-Adame, L. M., and Decouchant, D. (2021). An architecture for collaborative terrain sketching with mobile devices. Sensors 21 (23), 7881. doi:10.3390/s21237881

Miranda, B. P., Queiroz, V. F., Araújo, T. D. O., Santos, C. G. R., and Meiguins, B. S. (2022). A low-cost multi-user augmented reality application for data visualization. Multimedia Tools Appl. 81 (11), 14773–14801. doi:10.1007/s11042-021-11141-2

Mittmann, G., Barnard, A., Krammer, I., Martins, D., and Dias, J. (2022). LINA - a social augmented reality game around mental health, supporting real-world connection and sense of belonging for early adolescents. Proc. ACM Human-Computer Interact. 6 (CHI PLAY), 1–21. doi:10.1145/3549505

Miyazaki, M., and Komuro, T. (2021). AR peephole interface: extending the workspace of a mobile device using real-space information. Pervasive Mob. Comput. 78, 101489. doi:10.1016/j.pmcj.2021.101489

Moglia, A., Marsilio, L., Rossi, M., Pinelli, M., Lettieri, E., Mainardi, L., et al. (2024). Mixed reality and artificial intelligence: a holistic approach to multimodal visualization and extended interaction in knee osteotomy. IEEE J. Transl. Eng. Health Med. 12, 279–290. doi:10.1109/JTEHM.2023.3335608

Neamțu, C., Comes, R., Popovici, D.-M., Băutu, E., Liliana, M.-S., Syrotnik, A., et al. (2024). Evaluating user experience in the context of cultural heritage dissemination using extended reality: a case study of the dacian bronze matrix with hollow design. J. Comput. Cult. Herit. 17 (2), 1–21. doi:10.1145/3639933

Negrão, M. D., and Maciel, A. (2024). Characterizing head-gaze and hand affordances using AR for laparoscopy. Comput. and Graph. 121, 103936. doi:10.1016/j.cag.2024.103936

Norman, D., and Nielsen, J. (2012). Usability 101: introduction to usability. Dover, DE: Nielsen Norman Group. Available online at: https://www.nngroup.com/articles/usability-101-introduction-to-usability/ (Accessed April 11, 2025).

Norman, D., and Nielsen, J. (1998). The definition of user experience (UX). Dover, DE: Nielsen Norman Group. Available online at: https://www.nngroup.com/articles/definition-user-experience/ (Accessed April 11, 2025).

Oh, S., So, H.-J., and Gaydos, M. (2018). Hybrid augmented reality for participatory learning: the hidden efficacy of multi-user game-based simulation. IEEE Trans. Learn. Technol. 11 (1), 115–127. doi:10.1109/TLT.2017.2750673

Oh, J. Y., Park, J. H., and Park, J.-M. (2019). Virtual object manipulation by combining touch and head interactions for mobile augmented reality. Appl. Sci. 9 (14), 2933. doi:10.3390/app9142933

Ong, S. K., Wang, X., and Nee, A. Y. C. (2020). 3D bare-hand interactions enabling ubiquitous interactions with smart objects. Adv. Manuf. 8 (2), 133–143. doi:10.1007/s40436-020-00295-1

Oren, T. (1990). “Designing a new medium,” in The art of human-computer interface design (Addison-Wesley), 467–479.

Özacar, K., Hincapié-Ramos, J. D., Takashima, K., and Kitamura, Y. (2016). 3D selection techniques for mobile augmented reality head-mounted displays. Interact. Comput. doi:10.1093/iwc/iww035

Park, H., and Moon, H.-C. (2013). Design evaluation of information appliances using augmented reality-based tangible interaction. Comput. Industry 64 (7), 854–868. doi:10.1016/j.compind.2013.05.006

Pfeuffer, K., Abdrabou, Y., Esteves, A., Rivu, R., Abdelrahman, Y., Meitner, S., et al. (2021). ARtention: a design space for gaze-adaptive user interfaces in augmented reality. Comput. and Graph. 95, 1–12. doi:10.1016/j.cag.2021.01.001

Picardi, A., and Caruso, G. (2024). User-centered evaluation framework to support the interaction design for augmented reality applications. Multimodal Technol. Interact. 8 (5), 41. doi:10.3390/mti8050041

Polvi, J., Taketomi, T., Moteki, A., Yoshitake, T., Fukuoka, T., Yamamoto, G., et al. (2018). Handheld guides in inspection tasks: augmented reality versus picture. IEEE Trans. Vis. Comput. Graph. 24 (7), 2118–2128. doi:10.1109/TVCG.2017.2709746

Rebollo, C., Remolar, I., Rossano, V., and Lanzilotti, R. (2022). Multimedia augmented reality game for learning math. Multimedia Tools Appl. 81 (11), 14851–14868. doi:10.1007/s11042-021-10821-3

Ren, Y., Zhang, Y., Liu, Z., and Xie, N. (2024). Eye-hand typing: eye gaze assisted finger typing via Bayesian processes in AR. IEEE Trans. Vis. Comput. Graph. 30 (5), 2496–2506. doi:10.1109/TVCG.2024.3372106

Ro, H., Byun, J.-H., Park, Y. J., Lee, N. K., and Han, T.-D. (2019). AR pointer: advanced ray-casting interface using laser pointer metaphor for object manipulation in 3D augmented reality environment. Appl. Sci. 9 (15), 3078. doi:10.3390/app9153078

Rossano, V., Lanzilotti, R., Cazzolla, A., and Roselli, T. (2020). Augmented reality to support geometry learning. IEEE Access 8, 107772–107780. doi:10.1109/ACCESS.2020.3000990

Sanna, A., Manuri, F., Fiorenza, J., and De Pace, F. (2022). BARI: an affordable brain-augmented reality interface to support human–robot collaboration in assembly tasks. Information 13 (10), 460. doi:10.3390/info13100460

Sekhavat, Y. A. (2016). KioskAR: an augmented reality game as a new business model to present artworks. Int. J. Comput. Games Technol. 2016, 1–12. doi:10.1155/2016/7690754

Shafana, A. R. F., and Silpasuwanchai, C. (2023). Investigating the role of gesture modalities and screen size in an AR 3D game. Multimedia Tools Appl. 83 (6), 18169–18184. doi:10.1007/s11042-023-16052-y

Shen, X., Yan, Y., Yu, C., and Shi, Y. (2022). ClenchClick: hands-free target selection method leveraging teeth-clench for augmented reality. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6 (3), 1–26. doi:10.1145/3550327

Shi, R., Wei, Y., Qin, X., Hui, P., and Liang, H.-N. (2023). Exploring gaze-assisted and hand-based region selection in augmented reality. Proc. ACM Human-Computer Interact. 7 (ETRA), 1–19. doi:10.1145/3591129

Shih, N.-J., Diao, P.-H., and Chen, Y. (2019). ARTS, an AR tourism system, for the integration of 3D scanning and smartphone AR in cultural heritage tourism and pedagogy. Sensors 19 (17), 3725. doi:10.3390/s19173725

Silva, R., Jesus, R., and Jorge, P. (2023). Development and evaluation of a mobile application with augmented reality for guiding visitors on hiking trails. Multimodal Technol. Interact. 7 (6), 58. doi:10.3390/mti7060058

Sorrentino, F., and Spano, L. D. (2019). Post-it notes: supporting teachers in authoring vocabulary game contents. Multimedia Tools Appl. 78 (16), 23049–23074. doi:10.1007/s11042-019-7604-6

Sprute, D., Tönnies, K., and König, M. (2019). A study on different user interfaces for teaching virtual borders to mobile robots. Int. J. Soc. Robotics 11 (3), 373–388. doi:10.1007/s12369-018-0506-3

Stanney, K. M., Hughes, C., Nye, H., Cross, E., Boger, C. J., and Deming, S. (2024). “Interaction design for augmented, virtual, and extended reality environments,” in Interaction techniques and technologies in human-computer interaction. 1st Edn. Editors C. Stephanidis, and G. Salvendy (Boca Raton, FL: CRC Press).

Sun, W., Huang, M., Wu, C., Yang, R., Yue, Y., and Jiang, M. (2024). Tangible and mid-air interactions in hand-held augmented reality for upper limb rehabilitation: an evaluation of user experience and motor performance. Int. J. Human–Computer Interact. 40, 6722–6739. doi:10.1080/10447318.2024.2342089

Swaminathan, R., Schleicher, R., Burkard, S., Agurto, R., and Koleczko, S. (2013). Happy measure: augmented reality for mobile virtual furnishing. Int. J. Mob. Hum. Comput. Interact. 5 (1), 16–44. doi:10.4018/jmhci.2013010102

Tang, X., Li, R., and Fu, C.-W. (2022). CAFI-AR: contact-Aware freehand interaction with AR objects. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6 (4), 1–23. doi:10.1145/3569499

Tesařová, A., Herout, A., Bambušek, D., and Juřík, V. (2023). How to shoot yourself right with a smartphone? Virtual Real. 27 (3), 2357–2369. doi:10.1007/s10055-023-00812-0

Vargas González, A. N., Williamson, B., and LaViola, J. J. (2023). Authoring moving parts of objects in AR, VR and the desktop. Multimodal Technol. Interact. 7 (12), 117. doi:10.3390/mti7120117

Villanueva, A., Zhu, Z., Liu, Z., Wang, F., Chidambaram, S., and Ramani, K. (2022). ColabAR: a toolkit for remote collaboration in tangible augmented reality laboratories. Proc. ACM Human-Computer Interact. 6 (CSCW1), 1–22. doi:10.1145/3512928

Widiyanti, D. E., Asmoro, K., and Shin, S. Y. (2024). HoloGCS: mixed reality-based ground control station for unmanned aerial vehicle. Virtual Real. 28 (1), 40. doi:10.1007/s10055-023-00914-9

Wild, F., Marshall, L., Bernard, J., White, E., and Twycross, J. (2021). UNBODY: a poetry escape room in augmented reality. Information 12 (8), 295. doi:10.3390/info12080295

Xue, Z., Yang, J., Chen, R., He, Q., Li, Q., and Mei, X. (2024). AR-Assisted guidance for assembly and maintenance of avionics equipment. Appl. Sci. 14 (3), 1137. doi:10.3390/app14031137

Yi, J. H., and Kim, H. S. (2021). User experience research, experience design, and evaluation methods for museum mixed reality experience. J. Comput. Cult. Herit. 14 (4), 1–28. doi:10.1145/3462645

Yin, J., Fu, C., Zhang, X., and Liu, T. (2019). Precise target selection techniques in handheld augmented reality interfaces. IEEE Access 7, 17663–17674. doi:10.1109/ACCESS.2019.2895219

Yoon, J. W., Chen, R. E., Kim, E. J., Akinduro, O. O., Kerezoudis, P., Han, P. K., et al. (2018). Augmented reality for the surgeon: systematic review. Int. J. Med. Robotics Comput. Assisted Surg. 14 (4), e1914. doi:10.1002/rcs.1914

Zhang, Z., Pan, Z., Li, W., and Su, Z. (2023). X-Board: an egocentric adaptive AR assistant for perception in indoor environments. Virtual Real. 27 (2), 1327–1343. doi:10.1007/s10055-022-00742-3

Zhang, J., Chen, T., Gong, W., Liu, J., and Chen, J. (2024). Exploring data input problems in mixed reality environments: proposal and evaluation of natural interaction techniques. Future Internet 16 (5), 150. doi:10.3390/fi16050150

Zhang, X., He, W., Billinghurst, M., Yang, L., Feng, S., and Liu, D. (2024). Design and evaluation of bare-hand interaction for precise manipulation of distant objects in AR. Int. J. Human–Computer Interact. 40 (9), 2282–2296. doi:10.1080/10447318.2022.2158527

Zhao, S., Ni, Y., Dong, G., Tian, J., and Chen, Y. (2023). Comparing three XR technologies in reviewing performance-based building design: a pilot study of façade fenestrations. Comput. Animat. Virtual Worlds 34 (6), e2139. doi:10.1002/cav.2139

Zhou, Q., Syiem, B. V., Li, B., Goncalves, J., and Velloso, E. (2023). Reflected reality: augmented reality through the mirror. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7 (4), 1–28. doi:10.1145/3631431

Keywords: augmented reality, interaction design, user experience, user interface, natural interaction

Citation: Hughes CL and Karwowski W (2025) Evaluating interaction design and user experience in augmented reality: a systematic review. Front. Virtual Real. 6:1710161. doi: 10.3389/frvir.2025.1710161

Received: 21 September 2025; Accepted: 19 November 2025;
Published: 03 December 2025.

Edited by:

Serban Georgica Obreja, Polytechnic University of Bucharest, Romania

Reviewed by:

Ana Neacsu, Polytechnic University of Bucharest, Romania
Radu-Ovidiu Preda, Polytechnic University of Bucharest, Romania

Copyright © 2025 Hughes and Karwowski. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Claire L. Hughes, Claire_hughes@ucf.edu
