- 1Human-Computer Interaction, Institute for Computer Science, University of Würzburg, Würzburg, Germany
- 2Institute of Medical Teaching and Medical Education Research, University Hospital Würzburg, Würzburg, Germany
The use of VR for educational purposes provides the opportunity to integrate VR applications into assessments or graded examinations. Interacting with a VR environment requires specific human abilities, thus suggesting the existence of a VR competence. With regard to the emerging field of VR-based examinations, this VR competence might influence a candidate’s final grade and hence should be taken into account. In this paper, we propose and present a VR competence assessment application. The application features eight individual challenges that are based on generic 3D interaction techniques. In a pilot study, we measured the performance of 18 users. By identifying significant correlations between VR competence score, previous VR experience, and theoretically grounded contributing human abilities and characteristics, we provide first evidence that our VR competence assessment is effective. In addition, we provide first data indicating that a specific VR competence exists. Our analyses further revealed that mainly spatial ability, but also immersive tendency, correlated with VR competence scores. These insights not only allow educators and researchers to assess and potentially equalize the VR competence level of their subjects, but also help designers to provide effective tutorials for first-time VR users.
1 Introduction
Immersive Virtual Reality (VR) provides several benefits for learning and training new knowledge and skills. It can increase task performance (Stevens and Kincaid, 2015), increase learning motivation (Makransky and Lilleholt, 2018), allow for the visualization and analysis of complex learning contents (Dede, 2009), and achieve implicit learning by providing a direct and explicit audiovisual demonstration of how the learning content is applied (Freina and Ott, 2015; Slater et al., 2017). VR can support the learning of 3D geometry (Oberdörfer and Latoschik, 2019) and history (Fernes et al., 2023) as well as the training of medical emergency procedures (Mühling et al., 2023) and classroom management competency (Seufert et al., 2022). Besides enabling learning and training, VR could further facilitate the assessment of a learner’s performance. This is particularly relevant in the field of medical education, where practical examinations are known for their high demands on personnel and resources (Barman, 2005). Hence, realistic and easily repeatable VR-based scenarios are increasingly used to assess medical competencies across various specialties (Neher et al., 2025).
Despite significant technological advancements, VR fundamentally remains a mediated experience. This is primarily achieved through the use of Head-Mounted Displays (HMDs) combined with input devices such as game controllers or full-body tracking systems. VR environments offer interaction paradigms not feasible in the physical world, exemplified by features like teleportation. Furthermore, VR frequently employs feedback substitutions to compensate for absent sensory information, such as visually highlighting an object upon touch to indicate graspability (LaViola, 2017). Consequently, even with high degrees of realism, users must navigate a certain level of abstraction to effectively utilize VR systems. This process necessitates specific human abilities, which culminate in what we term VR Competence. This competence, which can vary in its level of development among individuals, extends beyond mere operational proficiency with input/output devices. It critically involves the capacity to comprehend and interpret information conveyed through VR-specific interaction metaphors, such as laser-pointer selection and feedback substitutions, as well as to execute these metaphors correctly. When considering VR as an examination platform, the equitable assessment of knowledge is paramount. Therefore, minimizing individual disparities in VR competence is crucial to ensure a fair evaluation for every candidate, preventing the assessment from being unduly influenced by variations in VR interaction proficiency.
To effectively account for VR competence in assessment scenarios, it is crucial to first identify its underlying human abilities. We propose a VR system designed to challenge users with a sequence of short levels, each targeting fundamental 3D interaction techniques. Specifically, each level focuses on a distinct metaphor related to either selection and manipulation or travel interaction. By measuring individual performance across these levels, we aim to quantify a user’s VR competence. VR competence is a subject’s proficiency with VR input/output devices and the capacity to execute interaction metaphors as well as to comprehend the information conveyed through them. Furthermore, we intend to assess various human abilities theoretically contributing to VR competence. By analyzing the correlations between these measured abilities and the performance data from our VR levels, we anticipate identifying the core human abilities that constitute VR competence.
1.1 Contribution
We developed a VR Competence Assessment environment, built around generic 3D interaction techniques, which features eight individual challenges as displayed in Figure 1. Using this system, we measured the performance of 18 participants. To establish a basic validity of our assessment method, we hypothesized that greater prior VR experience would correlate with higher VR competence. Our analysis revealed a significant positive correlation between these two measures, supporting the foundational validity of our approach. Furthermore, we investigated the relationship between user performance and several self-reported characteristics, including presence, immersive tendency, self-efficacy, technology literacy, and spatial ability. Our findings indicate that certain abilities, such as spatial ability and immersive tendency, were strongly associated with VR competence. Conversely, other factors like presence and technology literacy showed no significant correlation. While these insights await confirmation in larger cohorts, they offer immediate practical implications. Educators can leverage these findings to assess and, ideally, equalize the VR competence of candidates undergoing VR-based examinations. Additionally, these insights can guide designers in creating more effective tutorials for first-time VR users, ultimately enhancing their initial VR experience.

Figure 1. VR competence assessment level overview; From left to right, top to bottom: Button press, teleport, select, rotate and translate, orientation, scale, raycast and touch.
2 Theoretical background
In recent years, numerous VR applications have been developed for the training of technical and non-technical skills and have been partially integrated into medical curricula, ranging from brain death diagnostics (Junga et al., 2025) and skin cancer screenings (Mergen et al., 2024) to virtual autopsies (Klus et al., 2024) and the training of medical emergencies (Abbas et al., 2023). These immersive simulations offer realistic environments and allow learners to practice complex tasks that are difficult to replicate in traditional settings. Recent meta-analyses suggest that VR-based training may be at least as effective as, if not superior to, traditional methods (Kim and Kim, 2023; Liu et al., 2023). Importantly, in some areas, more immersive VR trainings appear to be less conducive to learning than those with lower levels of immersion (Kim and Kim, 2023), possibly due to increased cognitive load caused by the complexity of hardware and software controls (Han et al., 2021). Beyond training, VR is also being increasingly used in clinical assessments for undergraduate (Neher et al., 2025) and graduate (Keicher et al., 2024) medical learners, offering potential benefits in standardization, objectivity, and automation, despite current challenges regarding software maturity and implementation costs. In a recently conducted VR-based OSCE examination (Mühling et al., 2025), higher discrimination indices were observed compared to a content-equivalent physical examination. This may indicate that, in addition to medical performance, a latent construct such as VR competence may have influenced the assessment. To ensure fair exam conditions and avoid favoring participants with prior VR experience, the construct of VR competence should be further explored and considered in exam planning.
2.1 Challenges of interacting with VR environments
Immersion is defined as “the extent to which the computer displays are capable of delivering an inclusive, extensive, surrounding, and vivid illusion of reality to the senses of a human participant” (Slater and Wilbur, 1997). Immersion further encompasses the possible user actions within a given system (Slater, 2009) like grabbing and manipulating virtual objects. Compared to interacting with a virtual learning environment on a computer screen using mouse and keyboard, the egocentric visualization of and direct interaction with the virtual environment using VR not only leads to a high level of presence (Slater et al., 1996) but also facilitates the learning process for learners coming from less technologically advanced regions of the world (Makransky and Klingenberg, 2022). Presence describes the subjective acceptance of the virtual environment as one’s location (Slater, 2009) and hence indicates the realness of the virtual experience (Skarbez et al., 2017).
On a concrete level, interaction with an immersive virtual environment is realized by implementing 3D interaction techniques. 3D interaction techniques can belong to the overarching categories of selection, manipulation, navigation, and system control (LaViola, 2017). These well-researched techniques can be found in every 3D environment, independent of the display technology used. If no direct real-world mapping like real walking (Bruder and Steinicke, 2014) for navigation is possible, the interaction techniques can be implemented with metaphors (LaViola, 2017). An interaction metaphor represents an easily understandable substitute for a complex real world action like grasping for moving objects of any shape and weight. The design of these metaphors is influenced by both human factors, e.g., visual representation, ergonomics, cognitive load, and system factors, e.g., limitations of input devices or tracking space (LaViola, 2017).
2.1.1 Selection and manipulation
Selection and manipulation tasks involve first selecting and then modifying objects in virtual environments. Effective manipulation is important for many VR tasks, requiring the design of interfaces that enhance user performance and comfort (Foley et al., 1984; LaViola, 2017). Tasks are characterized by factors like object size, user distance or the user’s physical state (LaViola, 2017).
2.1.2 Navigation
Navigation in VR combines travel and wayfinding. In detail, travel is the locomotion component, which involves moving from one place to another, and wayfinding the cognitive component that involves route planning (LaViola, 2017). Effective navigation is crucial for usability, especially since virtual travel often supports other primary tasks like object interaction (LaViola, 2017). Travel can be further categorized into exploration, search, and maneuvering, each with unique requirements (LaViola, 2017). Wayfinding supports these tasks through cognitive aids like spatial understanding and mental maps (Golledge, 1999; LaViola, 2017).
2.1.3 System control
System control enables users to manage interactions within 3D environments, such as issuing commands and modifying system states. Unlike discrete control tasks like navigation, system control tasks often specify what should be done, leaving the system to define the details of how it is executed (LaViola, 2017). Interactions for system control are often realized with interaction metaphors belonging to the category of selection and manipulation and also can include symbolic input, i.e., the input of characters and numbers. Thus, system control provides no additional level of abstracted interactions and will not be further considered in our investigation of a VR competence.
The realization of selection and manipulation as well as navigation interaction techniques depends on specific factors. While selection and manipulation are commonly distinguished by range and representation (LaViola, 2017), travel includes the factors of range as well as destination, motion type, trigger, and representation (Oberdörfer et al., 2018). Hence, the individual realizations cause different levels of abstraction users need to overcome to successfully use a VR system. When aiming at the investigation of a general VR competence, these different levels of abstraction need to be respected.
2.2 Early approaches to VR competence assessment
Research on assessing VR user competences is limited, but notable exceptions include the Virtual Environment Performance Assessment Battery (VEPAB) by Lampton et al. (1994) and the Nottingham Assessment of Interaction in Virtual Environments (NAÏVE) by Griffiths et al. (2006).
Lampton et al. (1994) developed VEPAB to measure human performance in VR environments, particularly for military training. It assessed basic tasks like vision, locomotion, tracking, object manipulation, and reaction time to establish a baseline for VR performance. Through studies, they found that VEPAB could reliably measure VR performance, with significant improvements of participants over time (Lampton et al., 1994). Additionally, VEPAB was sensitive to differences in input devices.
Griffiths et al. (2006) developed NAÏVE to differentiate participant performance across various VR tasks, focusing on navigation, object interaction, and the combination of both. Their goal was, on the one hand, to screen the VR competence levels of study participants to ensure approximately equal levels. On the other hand, they aimed to assess VR competence for training purposes, for example, to ensure a minimum skill level needed to profit from VR training. The tasks were integrated into a seamless experience, and the tool successfully classified participants into performance categories (Griffiths et al., 2006).
While VEPAB and NAÏVE differed in their approaches – VEPAB used isolated tasks whereas NAÏVE integrated tasks into a longer experience – both tools included essential navigation and object manipulation tasks. However, some aspects of VEPAB, like vision and tracking tests, are outdated today. As VR technology has advanced significantly, there is a need for updated tools that reflect current research and technological possibilities. Hence, this work additionally aims to develop and evaluate a modern application to assess VR competence in line with today’s standards.
2.3 Personal characteristics and VR competence
Understanding how personal abilities and characteristics influence VR competence is important for later improving user performance by training these abilities and characteristics. Research has already identified certain human abilities, factors, and skills that influence user performance in VR applications. Hence, we briefly review spatial ability, self-efficacy, immersive tendency, technology literacy, and presence, and what is known about their impact on VR performance.
2.3.1 Spatial ability
Spatial ability describes the effective use of spatial information and is crucial in fields like science, technology, engineering, and mathematics (STEM). In everyday life, it is important, e.g., for orienting oneself in the environment (Pellegrino et al., 1984). Studies show that higher spatial ability leads to faster and more accurate 3D object manipulation (Drey et al., 2023) and moderates performance with 3D user interfaces when the interaction metaphor has a higher level of abstraction (Rizzo et al., 2005), while lower spatial ability can put users at a disadvantage in VR environments (Gittinger and Wiesche, 2023).
2.3.2 Self-efficacy
Self-efficacy is the belief in one’s ability to use one’s skills to achieve desired goals (Maddux and Kleiman, 2016). While its influence on computer usage has already been discussed extensively (Alharbi and Drew, 2019; Darsono, 2005; Lewis et al., 2003; Sharp, 2007; Teo, 2009; Xie et al., 2022), more recent studies demonstrate similar effects with VR. Higher self-efficacy was found to enhance perceived ease of use (Hsu et al., 2022; Xie et al., 2022), intention to use a VR system (Hsu et al., 2022; Xie et al., 2022), and learning outcomes (Mousavi et al., 2023).
2.3.3 Immersive tendency
Immersive tendency is one’s ability to experience presence in VR (Witmer and Singer, 1998) and to become more involved with virtual experiences (Banerjee et al., 2002; Gonçalves et al., 2023). It is believed to influence how users focus on tasks and process information within virtual environments (Gonçalves et al., 2023). Although empirical evidence for its effect on learning and performance is mixed (Khashe et al., 2018; Krassmann et al., 2020; Rose and Chen, 2018), immersive tendency remains an important aspect of VR interactions and thus may influence overall performance.
2.3.4 Technology literacy
Technology literacy, in the context of this work, is defined as “the ability of a person to use, manage, assess, and understand technology” (Dugger, 2001). Since the use of a technology is by definition part of technology literacy, it could be highly relevant to a VR competence. Previous research has shown that technology literacy enhances performance in educational contexts (Ismail et al., 2024; Naz et al., 2022), but direct links to VR are sparse. Yet, higher technology literacy likely aids VR interaction, making it a relevant factor for this study.
2.3.5 Presence
Presence is an application- and technology-dependent experience and refers to the feeling of being in a virtual environment rather than the physical one (Witmer and Singer, 1998). It has a weak but consistent positive correlation with performance in VR (Nash et al., 2000; Witmer and Singer, 1998), particularly in tasks involving spatial perception and procedural skills (Stevens and Kincaid, 2015; Grassini et al., 2020; Maneuvrier et al., 2020).
2.4 Research gap
Taken together, all these factors can influence a user’s performance in completing tasks in a VR environment. However, to the best of our knowledge, it is yet unclear to what extent each factor contributes to a user’s VR competence and, even more importantly, which factors form a VR competence. To close this research gap, we conducted a study assessing these individual personal characteristics and correlating them with our participants’ performance in correctly executing VR interaction techniques. That way, we can not only determine the VR competence of a user but also use the competence as a human factor in assessments of a user’s overall performance, e.g., in an exam setting.
3 System design
The proposed VR application needs to challenge users with the execution of commonly used interaction metaphors varying in degree of abstraction and measure their performance to assess their potential VR competence. The measured performance can subsequently be correlated with the individual characteristics of each user to also identify the human abilities that contribute most to the VR competence. Hence, the VR application shall 1) provide a sequence of challenges, each of which targets one interaction metaphor, and 2) measure a user’s performance in the execution of the interactions. We developed the application with “Unity” version 2022.3.23f1 using an “Oculus Quest 2” HMD with its game controllers. Several additional packages were utilized to aid the development. First, we used the “XR Interaction Toolkit” version 2.5.2 and the “Oculus XR Plugin” version 4.2.0. Next, several packages from “Tilia” were used: “Tilia CameraRigs TrackedAlias Unity” (v2.5.2), “Tilia CameraRigs XRPluginFramework Unity” (v2.1.11), “Tilia Indicators ObjectPointers Unity” (v2.2.10), “Tilia Input UnityInputSystem” (v2.4.8), and “Tilia Interactions Interactables Unity” (v2.16.6).
In total, our application consists of eight levels, each with a dedicated tutorial prior to the actual level. The tutorial guides the user through the task with detailed instructions, allowing them to try the interaction three times at their own speed before advancing to the assessment level. Once in the assessment level, a timer of 1 minute is started upon clicking the start button. During this time, the user is asked to complete as many repetitions of the respective interaction as possible. As the level progresses, the difficulty increases, e.g., due to smaller target objects. To avoid a ceiling effect and to derive VR competence thresholds at a later stage, we scaled the levels in a way that makes it impossible to complete them within 1 minute. We asked two colleagues with very high gaming and VR experience to tackle the levels to initially balance them. The application automatically logs the number of successfully completed repetitions and calculates the percentage of completed executions relative to the unattainable maximum score of the level. The VR competence score results from averaging the percentages of all levels. In the end, the logged data is saved as a .csv file for follow-up analyses.
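To make the scoring procedure concrete, the following C# sketch outlines how per-level percentages and the overall VR competence score could be computed and written to a CSV file. It is an illustrative reconstruction based on the description above; the class and member names (LevelResult, CompetenceScore) are ours and do not stem from the actual implementation.

```csharp
using System;
using System.Globalization;
using System.IO;
using System.Linq;

// Hypothetical container for the data each level logs during the 1-minute run.
public class LevelResult
{
    public string Name;
    public int CompletedRepetitions;   // successful executions within the time window
    public int UnattainableMaximum;    // level-specific ceiling used for normalization

    // Normalized level score in percent.
    public double Percentage => 100.0 * CompletedRepetitions / UnattainableMaximum;
}

public static class CompetenceScore
{
    // The VR competence score is the mean of the normalized per-level percentages.
    public static double Compute(LevelResult[] levels) => levels.Average(l => l.Percentage);

    // Writes one row per level plus the aggregated competence score.
    public static void SaveCsv(LevelResult[] levels, string path)
    {
        var rows = levels.Select(l => string.Format(CultureInfo.InvariantCulture,
            "{0},{1},{2},{3:F1}", l.Name, l.CompletedRepetitions, l.UnattainableMaximum, l.Percentage));
        File.WriteAllLines(path, new[] { "level,completed,maximum,percentage" }
            .Concat(rows)
            .Append(string.Format(CultureInfo.InvariantCulture, "competence,,,{0:F1}", Compute(levels))));
    }
}
```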
3.1 Levels for navigation
Travel is a supporting interaction that enables users to perform their primary task and is rarely the predominant goal of an application (LaViola, 2017). This might especially be the case when intending to use a VR application for graded exams. Hence, we made the decision to only add a teleportation travel technique level to our application. Teleportation causes the lowest level of cybersickness in comparison to other artificial travel techniques (Weißker et al., 2018). Also, the current gold standard for realizing navigation is a combination of teleportation for travel over greater distances and real walking for navigating within close range. While teleportation causes a certain level of abstraction, real walking remains a natural locomotion technique.
When navigating through a virtual environment using artificial travel techniques like teleportation, users might find it more challenging to develop a spatial understanding of the layout of the virtual environment (Weißker et al., 2018). Therefore, we decided to further include an assessment of wayfinding skills.
As a result, our VR competence assessment application tests navigation in two separate levels, one for travel and one for wayfinding. The level for travel assesses the user’s teleportation skills. Randomly generated teleport platforms decrease in size and increase in distance as the user progresses. Also, the platforms randomly vary in their vertical position, thus challenging the user to place the teleportation target at a higher or lower position. A curved ray is used to teleport between platforms, with feedback provided through visual cues and sound. Figure 2 depicts the teleportation task.

Figure 2. The travel level requires users to teleport from one small platform to the next, while the distance between the platforms increases and the size of the platforms decreases.
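A minimal sketch of how the platform progression in the travel level could be realized in Unity is given below. All numeric values and component names (e.g., TeleportPlatformSpawner) are assumptions for illustration, not the parameters used in our application.

```csharp
using UnityEngine;

public class TeleportPlatformSpawner : MonoBehaviour
{
    public GameObject platformPrefab;  // assumed to be assigned in the inspector
    public float startDistance = 2f;
    public float startScale = 1f;

    private int count;

    // Spawns the next platform: farther away, slightly smaller, at a random height.
    public void SpawnNext(Vector3 currentPlatformPosition)
    {
        count++;
        float distance = startDistance + 0.5f * count;               // increasing distance
        float scale = Mathf.Max(0.2f, startScale - 0.05f * count);   // decreasing size
        float height = Random.Range(-0.5f, 0.5f);                    // random vertical offset

        Vector2 dir = Random.insideUnitCircle.normalized;
        Vector3 position = currentPlatformPosition
            + new Vector3(dir.x, 0f, dir.y) * distance
            + Vector3.up * height;

        GameObject platform = Instantiate(platformPrefab, position, Quaternion.identity);
        platform.transform.localScale = new Vector3(scale, platform.transform.localScale.y, scale);
    }
}
```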
The wayfinding level tests the user’s orientation based on the approach described by Weißker et al. (Weißker et al., 2018). The user is placed in a city environment where they must navigate a path and estimate their starting point after taking two turns, as displayed in Figure 1, first lower image. The city layout is randomly chosen out of ten previously generated maps. Using teleportation to travel, the users are asked to travel to a specific position and subsequently point to their starting position with a ray. The scoring is based on the accuracy of their estimate in degrees, making it the only level whose scoring is not based on the number of completed executions.
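The angular scoring of the wayfinding level can be illustrated with the following sketch, which computes the horizontal deviation between the pointed direction and the true direction to the starting position. The projection onto the ground plane and the linear mapping to a 0–1 score are assumptions for illustration, not the exact scoring formula used.

```csharp
using UnityEngine;

public static class WayfindingScoring
{
    // Angular error (in degrees) between where the user points and where the start actually lies.
    public static float AngularErrorDegrees(Vector3 userPosition, Vector3 pointedDirection, Vector3 startPosition)
    {
        Vector3 trueDirection = startPosition - userPosition;
        trueDirection.y = 0f;      // only the horizontal deviation is considered in this sketch
        pointedDirection.y = 0f;
        return Vector3.Angle(pointedDirection, trueDirection);   // 0..180 degrees
    }

    // Maps the error to a score between 1 (perfect estimate) and 0 (180 degrees off).
    public static float Score(float errorDegrees) => 1f - Mathf.Clamp01(errorDegrees / 180f);
}
```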
3.2 Levels for selection
We test the user’s ability to correctly select targets with three levels. Two levels challenge the user to correctly select targets at different distances. The third level requires the user to find and select a specific cube in a pile of other cubes.
The cube selection level challenges the user to identify and grab a target red cube from a pile of blue distractor cubes, with the application giving appropriate feedback. The task becomes progressively harder as the red and blue cubes shrink in size. The task is displayed in Figure 1, third upper image.
In the raycast level, the user uses a virtual ray to aim at square buttons that randomly appear at a certain distance in the environment. After confirming the selections by pressing the trigger on the controller, auditory feedback is played and a new button appears. As the task progresses, the buttons decrease in size, increasing the difficulty and requiring greater precision for aiming. Figure 1 third lower image depicts the task.
The touch level is similar to the raycast one, with the difference that the buttons need to be touched directly with the virtual controller. Hence, this level tests selection at close range, as displayed in Figure 1, fourth lower image. The task requires precision, as new, progressively smaller buttons spawn at random locations after each successful touch.
3.3 Levels for object manipulation
Object manipulation is tested with two levels, the first one combining rotation and repositioning. It requires the user to grab a cube and shove it into a tube-like box, as depicted in Figure 3 and Figure 1, fourth upper image. With each cube placed into the box, the box rotates randomly and shrinks slightly, forcing the user to adjust the cube’s rotation and position more accurately.

Figure 3. The rotation level requires users to grab a cube and shove it into a nearby box, adjusting the cube’s rotation and position.
In the scaling level, the user is tasked with scaling a cube to match a size between two reference cubes. The larger reference cube designates the upper size limit, the smaller reference cube represents the lower size limit. The user grabs the interactable cube with both hands and pulls them apart to scale it, with the cube’s color changing to green when the correct size is achieved. Figure 4 and Figure 1 second lower image depict the scaling interaction. The difficulty increases as the size difference between the reference cubes gradually shrinks, requiring the user to be more precise with their scaling.
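A possible implementation of the size check in the scaling level is sketched below: the interactable cube counts as correctly scaled once its size lies strictly between the two reference cubes, and its color switches to green as feedback. Field names and the use of a single axis for uniform scaling are illustrative assumptions, not the actual implementation.

```csharp
using UnityEngine;

public class ScalingTargetCheck : MonoBehaviour
{
    public Transform smallerReference;   // lower size limit
    public Transform largerReference;    // upper size limit
    public Renderer cubeRenderer;

    // Cubes are scaled uniformly, so checking one axis suffices in this sketch.
    public bool IsWithinLimits()
    {
        float size = transform.localScale.x;
        return size > smallerReference.localScale.x && size < largerReference.localScale.x;
    }

    void Update()
    {
        // Green signals that the target size has been reached.
        cubeRenderer.material.color = IsWithinLimits() ? Color.green : Color.white;
    }
}
```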
3.4 Level for hardware handling
Since most VR applications are controlled with the respective game controllers, we also added a hardware handling level to assess a user’s skill in pressing the correct buttons. This button press level displays an extra pair of 3D models of the game controllers and highlights specific buttons to be pressed, as displayed in Figure 1, first upper image. Over time, the number of buttons to be pressed at the same time increases to make the task more complex. Users receive continuous visual feedback (green for correct button presses, red for incorrect) in conjunction with auditory feedback upon task completion.
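The completion check of the button press level reduces to a set comparison between the highlighted and the currently pressed buttons, as the following sketch illustrates. Representing buttons as string identifiers is an assumption; the actual input handling goes through the controller APIs and is omitted here.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ButtonPressCheck
{
    // The task step is completed once exactly the highlighted buttons are pressed.
    public static bool TaskCompleted(ISet<string> required, ISet<string> pressed) =>
        required.SetEquals(pressed);

    // Per-button feedback: green for a required button that is pressed, red for a wrong press.
    public static IEnumerable<(string button, string feedback)> Feedback(
        ISet<string> required, ISet<string> pressed) =>
        pressed.Select(b => (b, required.Contains(b) ? "green" : "red"));
}
```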
4 Methodology
We conducted a user study to investigate whether 1) a specific VR competence can be measured with our VR application, and 2) the VR competence depends on specific human abilities and characteristics. In our study, the VR competence assessment application challenged the participants with the levels in the following order as displayed in Figure 1: Button press, teleportation, selection and grabbing, rotation, orientation, scale, raycast, and touch.
Based on our theoretical considerations in Section 2 and the design of our system described in Section 3, the following hypotheses were generated.
Under the assumption that greater experience with VR systems correlates with enhanced VR competence, we assessed participants’ prior VR exposure. This assessment included quantifying both their total hours of VR usage and the cumulative number of individual VR experiences. We hypothesized a higher VR competence score for users with higher VR experience.
We further assessed individual abilities and characteristics to investigate their role with respect to a subject’s VR competence. As we did not find a clear trend for immersive tendency in our analysis of previous research in Section 2,
4.1 Measures
Besides automatically logging the performance of the participants during runtime of our VR competence assessment application, we administered several questionnaires to assess the participants’ individual abilities and characteristics. Also, we asked for some demographic data.
4.1.1 Spatial ability
To measure spatial ability, the 20-item Mental Rotation Test from Vandenberg and Kuse (Vandenberg and Kuse, 1978) was used, in the redrawn version from Peters et al. (1995). In the test, participants are presented with a 3D object made of cubes. Subsequently, they must select the two identical but rotated objects from four options. The two incorrect options are either mirrored versions of the target object or completely different objects. An answer is scored as correct only if both figures are recognized correctly, therefore a maximum score of 20 was possible. For instructing this task, the approach by Peters et al. (1995) was used. As it was important that the instructions and examples of this test were understood correctly, they were translated into German.
4.1.2 Self-efficacy
Self-efficacy was recorded with a modified version of the technology self-efficacy questionnaire (Holcomb et al., 2004), in which “computer” was replaced with “VR”. It was presented in its original language, English. On a scale of one to five, participants rate how strongly they agree with statements regarding their experience with VR. Low scores indicate low agreement with the statements.
4.1.3 Immersive tendency
Immersive tendency was measured with the corresponding questionnaire by Witmer and Singer (Witmer and Singer, 1998). The Immersive Tendency Questionnaire (ITQ) assesses a participant’s immersive tendency, their current alertness as well as fitness, and their ability to focus. It was administered in its original version in English.
4.1.4 Presence
We adapted the single-item Mid Immersion Presence Questionnaire (MIPQ) (Bouchard et al., 2004; Bouchard et al., 2008) to assess the experienced presence of our participants. The MIPQ consists of the orally presented question “To which extent do you feel present in the virtual environment, as if you were really there?“. Participants rate their current presence on a scale from 0 to 10. Higher scores indicate higher presence. We, however, administered the question after the end of the VR exposure as part of the post-questionnaire.
4.1.5 Technology literacy
Technology literacy was recorded with the fitting subscale from the technology affinity questionnaire by Karrer et al. (2009). Participants were asked to rate their agreement with statements about their attitudes and skills regarding electronic devices. It was rated on a scale of one to five, with low scores indicating low agreement. This questionnaire was shown in its original language, German.
4.1.6 Cybersickness
We measured cybersickness before and after the exposure to VR using the Simulator Sickness Questionnaire (SSQ) (Kennedy et al., 1993) to rule out the often problematic zero-baseline assumption (Brown et al., 2022). This is to ensure that the application does not trigger extensive simulator sickness that risks the wellbeing of the user and influences the VR competence score. The SSQ scales range from 0 to 3. The total score was calculated as described by Kennedy et al. (1993), where low scores indicate low sickness. The German translation of the items stems from Hösch (Hösch, 2018).
4.1.7 Usability
The usability of the application was assessed post immersion with the System Usability Scale (SUS) (Brooke, 1996) (German version by Rummel (Rummel, 2016)) to ensure that possible usability issues do not confound the VR competence score. For this purpose, participants rated their agreement with statements about the application from one to five. It was scored as described by Brooke (Brooke, 1996) with the best score possible being 100.
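For reference, the standard SUS scoring described by Brooke (1996) can be expressed as follows; this is a generic sketch of the published scoring rule, not our analysis script.

```csharp
using System;

public static class SusScore
{
    // responses: the 10 item ratings, each between 1 and 5, in questionnaire order.
    public static double Compute(int[] responses)
    {
        if (responses.Length != 10) throw new ArgumentException("SUS has 10 items.");
        double sum = 0;
        for (int i = 0; i < 10; i++)
            sum += (i % 2 == 0)
                ? responses[i] - 1     // odd-numbered items (1, 3, 5, 7, 9): rating minus 1
                : 5 - responses[i];    // even-numbered items (2, 4, 6, 8, 10): 5 minus rating
        return sum * 2.5;              // scales the 0..40 sum to the 0..100 SUS range
    }
}
```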
4.1.8 Demography
Participants were asked about their gender, age, nationality, education level, and current main occupation to better understand our sample. In order to ensure that participants experienced the VR environment as intended, we also surveyed dexterity, possible visual and hearing impairments, as well as color blindness and language proficiency in English and German. Lastly, we asked participants about their technology usage to explore possible patterns in relation to our study measures. This included VR experience in hours of use and number of exposures, video game play time per day, as well as internet, mobile phone, and PC usage per day.
4.2 Apparatus
The study was conducted in a small lab with two workstations, offering a final tracking space of about 3 × 4 meters. Lighting was controlled at all times with blinds to avoid issues with the tracking. The HMD used in the study was an Oculus Quest 2 with the accompanying controllers; no additional trackers were used. The HMD was connected to the PC via cable, utilizing the Meta Quest Link application version 68.0.0.515.361. The PC ran on Windows 10 Enterprise and had an Intel i9-13900K processor, an NVIDIA GeForce RTX 4080 graphics card, and 64 gigabytes of RAM available. Just like in development, the VR competence assessment application ran on Unity 2022.3.23f1. The inter-pupillary distance of the HMD was set to the advised preset 2, which corresponds to 63 mm and is suitable for most users.
4.3 Study procedure and piloting
After welcoming the participants, they were seated at a desk to complete the digital pre-questionnaire. It provided details on the study, like its duration as well as measured factors, and required informed consent. Participants were then given safety instructions for VR use, followed by an assessment of their spatial abilities and a pre-VR SSQ. As noted in Subsection 4.1, questionnaires were administered in either German or English, depending on which version was available.
Next, participants received a verbal explanation of the VR application’s purpose and structure, including a short explanation of the controllers. They then stood in a designated area, adjusted the HMD, and started the VR tasks described in Section 3.
Upon completing the VR levels, participants returned to the PC used for the survey to complete further questionnaires, including the SSQ, one-item presence questionnaire, SUS, ITQ, self-efficacy and technology literacy assessments. Finally, demographic data were collected, and participants were asked to confirm the correctness of their responses before being thanked for their participation.
This study procedure and the application itself were tested in three separate pilot studies. Based on this feedback, the structure and clarity of the questionnaire were improved. Additionally, the instructions for the VR levels were refined and small bugs in the application were fixed beforehand.
4.4 Participants
The study was conducted as a lab study, recruiting participants via the institute’s recruitment platform. Participants received course credits required for obtaining their program of study’s final degree as compensation for their participation. A total of 18 participants were surveyed, of whom nine were male and nine were female. The age ranged from 20 to 28 years, with an average of 23.72 (SD = 2.42). Most participants were students (n = 16), with varying levels of VR experience. No participants reported color blindness or hearing impairments. However, nine participants reported visual impairments. Specifically, five individuals wore glasses, three used contact lenses, and one had an uncorrected visual impairment. Despite this, the participant with the uncorrected visual impairment was retained in the analysis after verbally confirming clear perception of the VR application. The remaining nine participants reported no visual impairments. All participants were native German speakers. Furthermore, 13 had been speaking English for over 10 years, while the remaining five had between five and 10 years of experience with the language.
5 Results
In order to screen out inattentive participants, two attention checks were included in the questionnaire, which were passed by all participants. Additionally, every participant indicated at the end of the questionnaire that they had answered conscientiously. Thus, all 18 participants could be evaluated. Data preprocessing was performed in Excel, with analysis conducted in JASP. The VR competence scores were normalized prior to the analysis for comparability across levels. To test our hypotheses, we used Pearson correlations, assuming normally distributed data with no outliers. If assumptions of normality or linearity were violated, or outliers were present, we used Spearman’s rho (ρ) instead.
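The two correlation measures can be stated compactly; the sketch below is for illustration only and is not the JASP implementation that produced the reported values. Spearman’s rho is computed as the Pearson correlation of the rank-transformed samples, with tied values receiving average ranks.

```csharp
using System;
using System.Linq;

public static class CorrelationSketch
{
    // Pearson correlation coefficient between two equal-length samples.
    public static double Pearson(double[] x, double[] y)
    {
        double mx = x.Average(), my = y.Average();
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < x.Length; i++)
        {
            cov += (x[i] - mx) * (y[i] - my);
            vx += (x[i] - mx) * (x[i] - mx);
            vy += (y[i] - my) * (y[i] - my);
        }
        return cov / Math.Sqrt(vx * vy);
    }

    // Spearman's rho: Pearson correlation of the ranks.
    public static double Spearman(double[] x, double[] y) => Pearson(Ranks(x), Ranks(y));

    // Assigns 1-based ranks; tied values share the average of the ranks they span.
    private static double[] Ranks(double[] v)
    {
        var order = v.Select((value, index) => (value, index)).OrderBy(p => p.value).ToArray();
        var ranks = new double[v.Length];
        int i = 0;
        while (i < order.Length)
        {
            int j = i;
            while (j + 1 < order.Length && order[j + 1].value == order[i].value) j++;
            double avgRank = (i + j) / 2.0 + 1;
            for (int k = i; k <= j; k++) ranks[order[k].index] = avgRank;
            i = j + 1;
        }
        return ranks;
    }
}
```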
5.1 Control variables
5.1.1 Presence
Participants reported a mean presence score of 7.28 (SD = 1.71), indicating a generally good experience of presence in the virtual environment. That way, the likelihood of negative effects on the data due to low presence is small.
5.1.2 Usability
The SUS yielded an average score of 81.5 (SD = 13.0) after reverse coding and scaling, which is considered good (Bangor et al., 2008). This indicates that the usability of the system did not significantly influence participant performance in the VR competence assessment.
5.1.3 Cybersickness
We calculated the simulator sickness scores as described by Kennedy et al. (1993). Four participants reported simulator sickness scores above 20 before the experimental trial. The change in symptoms between pre- and post-VR measures showed that five participants experienced no change, seven reported a decrease, and five had a slight increase, with a maximum of eight points. One participant had a significant increase of 45 points but was nevertheless included in the analysis, as they did not report any issues during or after VR use and were otherwise inconspicuous.
5.2 VR competence
Tables 1, 2 give an overview of the participants’ performance across the eight levels of our VR application. In order to calculate the percentages, each level score was normalized to allow for comparison between the levels. As we had only set the unattainable maximum scores of the levels using the initial balancing with two colleagues, we used our measurements to improve the balancing and derive first thresholds. For this purpose, we took the highest achieved score and multiplied it by 1.2 to obtain a new, unattainable maximum score for each level. The calculated scores are as follows: Button press 36, Teleportation 96, Selection 59, Rotation 42, Scaling 31, Raycasting 110, and Touching 79.
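The rebalancing step can be summarized in a short sketch: the new unattainable maximum of each repetition-based level is the highest observed score scaled by 1.2. The rounding applied here is an assumption made for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class LevelRebalancing
{
    // bestObserved: highest score achieved per level in the study.
    // Returns the new unattainable maximum per level (observed best times 1.2).
    public static Dictionary<string, int> DeriveMaxima(Dictionary<string, int> bestObserved) =>
        bestObserved.ToDictionary(kv => kv.Key, kv => (int)Math.Round(kv.Value * 1.2));
}
```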
We calculated Cronbach’s α
Next, it was of interest to look at each level score in detail. Due to the extensive nature of the findings, this section focuses exclusively on discussing prominent patterns. A complete overview of all results is provided in Table 3.
Spatial ability showed a significant positive correlation with all levels except button press. This even surpassed VR experience in hours, which correlated significantly and positively with the levels scale, touch, teleport, rotate, and raycast. In comparison, the experience in frequency of usage only correlated positively and significantly with teleport, rotate, and raycast. Additionally, technology literacy demonstrated a significant positive relationship with the levels scale, rotate, and raycast. Immersive tendency, on the other hand, had significant positive correlations with select, teleport, and rotate. Presence and self-efficacy were only significantly correlated with the orientation level.
As spatial ability was most strongly associated with performance in executing 3D interactions, we explored whether it increases with previous VR experience. The Spearman’s correlation test between spatial ability and VR experience in hours revealed a moderate, non-significant bidirectional relationship (
5.3 Hypotheses testing
The results for the hypotheses are summarized in Table 4.
5.3.1 VR experience
A significant strong positive correlation was found between VR experience in hours (M = 7.08, SD = 6.32) and VR competence score,
The correlation between the number of exposures to VR (M = 10.58, SD = 6.80) and VR competence score was significant and moderately positive,
5.3.2 Spatial ability
A significant strong positive correlation was found between spatial ability (M = 14.56, SD = 3.49) and VR competence score,
5.3.3 Self-efficacy
There was a moderate but non-significant positive correlation between self-efficacy (M = 69.22, SD = 9.98) and VR competence score,
5.3.4 Immersive tendency
A strong positive correlation between immersive tendency (M = 83.67, SD = 12.75) and VR competence score was significant, r = 0.560, p = 0.016, supporting
5.3.5 Presence
The correlation between presence and VR competence score was moderately positive but not significant,
5.3.6 Technology literacy
The correlation between technology literacy (M = 3.86, SD = 0.68) and VR competence score was moderately positive but non-significant, r = 0.337, p = 0.086, thus
6 Discussion
The goals of our research project were twofold. We intended to investigate whether a specific VR competence can be measured. Additionally, we aimed at identifying human abilities and characteristics contributing to a VR competence. In general, our results indicate that individuals differ in their performance when executing 3D interactions in VR, and that our VR application successfully detected these differences between our participants. We further managed to identify human abilities that appear to have a direct connection with the performance of the users.
6.1 VR competence
Using our VR application, we detected individual differences in the participants’ performance in executing the tested 3D interactions. We hypothesized that a higher experience with using VR would improve the VR competence and hence positively affect a user’s performance when executing grounding 3D interactions. The significant correlation between VR experience in hours and VR competence
VR experience measured by number of exposures
In terms of general feedback, participants provided positive feedback on the application. They described the levels as fun and interactive, with some likening them to mini-games and calling it one of the best VR studies they had participated in so far.
We did an exploratory factor analysis to check the dimensionality of our VR competence measurement tool by computing Cronbach’s α
As indicated in Table 3, the level for rotating and inserting an item correlated with five out of seven variables. This level combined various interaction metaphors at once, i.e., selection by touch as well as grabbing an object and carefully manipulating it. Hence, the level tests a user’s overall VR competence with respect to interaction with objects and the virtual environment in general. The correlations observed support our assumption that a VR competence exists and is composed of the proposed human abilities and characteristics. In contrast, the button press level did not correlate with any variable. This is explainable by the requirements of the task. The layout of a game controller and the distribution of the buttons mainly require hand-eye coordination and the internalization of the controller’s layout. The users need to spot the highlighted buttons and subsequently press them with the respective fingers. In contrast, the tested 3D interaction techniques require users to overcome a certain level of abstraction with respect to representation as well as interaction modality. Also, 3D interaction techniques need to be processed spatially to be used effectively. Yet, when it comes to using VR applications for graded exams, testing a candidate’s hand-eye coordination and controller layout internalization might still be of importance. Hence, we argue for keeping it part of a user’s VR competence assessment.
Although our results are notable and allow researchers and educators to assess the VR competence level of their target group, they also spark future work. Importantly, the question arises as to what extent the demonstrated VR competence influences results of future VR-based exams. Pre-assessing VR competence would enable the correlation of individual VR competence levels with final grades from VR-based examinations. This could further clarify the significance of the aspects spatial orientation and hardware proficiency for exam performance.
6.2 VR performance and human abilities
The strong correlation between spatial ability and VR competence
The lack of a significant correlation between self-efficacy and VR competence
A strong positive correlation was found between immersive tendency and VR competence
Although a relationship between presence and VR competence was hypothesized
The weak, non-significant correlation between technology literacy and VR competence
6.3 Limitations
This study has several limitations. First, the sample mainly consisted of students, predominantly from technology backgrounds and with prior VR experience. This limits the generalizability of the findings to broader populations, particularly those unfamiliar with VR. In addition, our sample size is rather small, which might further limit the generalizability. Second, minor bugs within the application, such as unintentional teleportation during level transitions, occasionally disrupted the user experience. In these instances, verbal assistance from the experimenter may have inadvertently reduced user presence. Third, the spatial ability questionnaire proved to be both challenging and time-consuming. This could have contributed to participants reporting simulator sickness even before VR use, potentially affecting their subsequent performance within the VR environment. Fourth, while participants reported high English language proficiency, the assessment instruments in our questionnaire were administered in either German or English, depending on the availability of a validated translated version. Although this approach aimed to prevent issues from non-validated translations, it might have still influenced the accuracy of participants’ self-reports. Finally, all participants completed the levels in an identical sequence, which may have introduced an order bias to the results.
Also, it is important to address the implementation of the VR competence assessment application. While we aimed at creating a general skill assessment, it is important to acknowledge that we are currently testing only a small subset of possible interaction techniques in VR. In the future, it would be ideal to create fitting levels for all possible types of interaction techniques, allowing the examiner to choose the ones relevant for their VR application.
7 Conclusion
The increasing integration of VR into educational contexts provides the opportunity to employ VR applications in graded examinations. Given that effective VR interaction relies on specific human abilities and characteristics, we postulate the existence of a distinct VR competence. VR competence is a subject’s proficiency with VR input/output devices and the capacity to execute interaction metaphors as well as to comprehend the information conveyed through them. For VR-based examinations, this inherent VR competence can affect a user’s performance, thereby necessitating its explicit consideration. To investigate and quantify individual VR competence, we designed and developed a novel VR competence assessment application. This application incorporates eight distinct challenges, as illustrated in Figure 1, which are grounded in generic 3D interaction techniques. In a user study involving 18 participants, we systematically measured their performance within this application. We hypothesized that higher VR experience would correlate with higher VR competence. Our analysis revealed a statistically significant positive correlation between participants’ VR experience and their measured VR competence scores. This finding constitutes initial evidence for the validity of our assessment instrument in quantifying an individual’s VR competence level.
To comprehensively explore the constituent elements of VR competence, we additionally administered questionnaires to record participants’ levels of presence, immersive tendency, self-efficacy, technology literacy, and spatial ability. Our analyses demonstrated that spatial ability, and to a lesser extent immersive tendency, were strongly associated with higher VR competence scores. This insight empowers educators and researchers to not only assess but also proactively equalize the VR competence level of their subjects, thus ensuring fairer assessments. Furthermore, these findings provide guidance for designers in developing highly effective tutorials for novice VR users. Our study indicated that higher spatial ability is associated with better VR performance, suggesting the benefit of incorporating a spatial training aspect into the practice of general 3D interaction techniques.
Future work needs to focus on investigating whether the VR competence level influences a candidate’s grade in a VR-based exam. To ensure similar conditions for a VR-based exam, e.g., objective structured clinical examinations, it should be integrated into a curriculum that already uses VR-based learning tools, e.g., emergency simulation training (Mühling et al., 2023). By measuring the candidates’ VR competence level, an in-depth analysis of the potential influences can be conducted. Also, it is of importance to advance the structure of the VR competence score by investigating the aspects of hardware knowledge and orientation in virtual environments. A last research avenue could be investigating VR competence training that takes into account the importance of simultaneous spatial ability training.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
Ethical approval was not required for the studies involving humans because the institute where this research was carried out does not require ethical approval to conduct immersive VR user studies. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
SO: Conceptualization, Project administration, Writing – review and editing, Methodology, Investigation, Writing – original draft, Supervision. MH: Visualization, Formal Analysis, Writing – original draft, Methodology, Data curation, Investigation, Resources, Software, Writing – review and editing, Conceptualization. TM: Conceptualization, Writing – original draft, Supervision, Writing – review and editing. VS: Writing – review and editing, Conceptualization. SK: Supervision, Writing – review and editing. ML: Supervision, Writing – review and editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This publication was supported by the Open Access Publication Fund of the University of Würzburg.
Acknowledgments
The authors would like to thank Sophia Maier for developing the initial version of the VR competence assessment application.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.
Generative AI statement
The author(s) declare that no Generative AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frvir.2025.1608593/full#supplementary-material
References
Abbas, J. R., Chu, M. M. H., Jeyarajah, C., Isba, R., Payton, A., McGrath, B., et al. (2023). Virtual reality in simulation-based emergency skills training: a systematic review with a narrative synthesis. Resusc. Plus 16, 100484. doi:10.1016/j.resplu.2023.100484
Alharbi, S., and Drew, S. (2019). The role of self-efficacy in technology acceptance. Proc. Future Technol. Conf. (FTC) 2018 1, 1142–1150. doi:10.1007/978-3-030-02686-8_85
Banerjee, P., Bochenek, G. M., and Ragusa, J. M. (2002). Analyzing the relationship of presence and immersive tendencies on the conceptual design review process. J. Comput. Inf. Sci. Eng. 2, 59–64. doi:10.1115/1.1486218
Bangor, A., Kortum, P. T., and Miller, J. T. (2008). An empirical evaluation of the system usability scale. Intl. J. Human–Computer Interact. 24, 574–594. doi:10.1080/10447310802205776
Barman, A. (2005). Critiques on the objective structured clinical examination. Ann. Acad. Med. Singap. 34, 478–482. doi:10.47102/annals-acadmedsg.v34n8p478
Bouchard, S., Robillard, G., St-Jacques, J., Dumoulin, S., Patry, M., and Renaud, P. (2004). “Reliability and validity of a single-item measure of presence in VR,” in Proceedings. Second international conference on creating, connecting and collaborating through computing (Ottawa, Ont., Canada: IEEE), 59–61. doi:10.1109/HAVE.2004.1391882
Bouchard, S., St-Jacques, J., Robillard, G., and Renaud, P. (2008). Anxiety increases the feeling of presence in virtual reality. Presence 17, 376–391. doi:10.1162/pres.17.4.376
Brooke, J. (1996). “SUS-A quick and dirty usability scale,” in Usability evaluation in industry (London, England: Taylor and Francis), 189, 4–7.
Brown, P., Spronck, P., and Powell, W. A. (2022). The simulator sickness questionnaire, and the erroneous zero baseline assumption. Front. Virtual Real. 3. doi:10.3389/frvir.2022.945800
Bruder, G., and Steinicke, F. (2014). “Threefolded motion perception during immersive walkthroughs,” in Proceedings of the 20th ACM symposium on virtual reality software and technology (New York, NY, USA: Association for Computing Machinery), 177–185. doi:10.1145/2671015.2671026
Darsono, L. I. (2005). Examining information technology acceptance by individual professionals. Gadjah Mada Int. J. Bus. 7, 155–178. doi:10.22146/gamaijb.5576
Dede, C. J. (2009). Immersive interfaces for engagement and learning. Science 323, 66–69. doi:10.1126/science.1167311
Drey, T., Montag, M., Vogt, A., Rixen, N., Seufert, T., Zander, S., et al. (2023). “Investigating the effects of individual spatial abilities on virtual reality object manipulation,” in Proceedings of the 2023 CHI conference on human factors in computing systems (New York, NY, USA: Association for Computing Machinery). doi:10.1145/3544548.3581004
Dugger, W. E. (2001). Standards for technological literacy. Phi Delta Kappan 82, 513–517. doi:10.1177/003172170108200707
Fernes, D., Oberdörfer, S., and Latoschik, M. E. (2023). “Work, trade, learn: developing an immersive serious game for history education,” in 2023 9th international conference of the immersive learning research network (iLRN).
Foley, J. D., Wallace, V. L., and Chan, P. (1984). The human factors of computer graphics interaction techniques. IEEE Comput. Graph. Appl. 4, 13–48. doi:10.1109/mcg.1984.6429355
Freina, L., and Ott, M. (2015). “A literature review on immersive virtual reality in education: state of the art and perspectives,” in Proceedings of the 11th eLearning and software for education (eLSE ’15) (bucharest, Romania).
Gittinger, M., and Wiesche, D. (2023). Systematic review of spatial abilities and virtual reality: the role of interaction. J. Eng. Educ. 113, 919–938. doi:10.1002/jee.20568
Golledge, R. G. (1999). Wayfinding behavior: cognitive mapping and other spatial processes. Baltimore, Maryland: JHU Press.
Gonçalves, G., Meirinhos, G., Melo, M., and Bessa, M. (2023). Correlational study on novelty factor, immersive tendencies, purchase intention and memory in immersive VR e-commerce applications. Sci. Rep. 13, 11407. doi:10.1038/s41598-023-36557-8
Grassini, S., Laumann, K., and Rasmussen Skogstad, M. (2020). The use of virtual reality alone does not promote training performance (but sense of presence does). Front. Psychol. 11, 1743. doi:10.3389/fpsyg.2020.01743
Griffiths, G., Nichols, N., and Wilson, S. (2006). Performance of new participants in virtual environments: the Nottingham tool for assessment of interaction in virtual environments (NAÏVE). Int. J. Human-Computer Stud. 64, 240–250. doi:10.1016/j.ijhcs.2005.08.008
Han, J., Zheng, Q., and Ding, Y. (2021). Lost in virtual reality? Cognitive load in high immersive VR environments. J. Adv. Inf. Technol. 12, 302–310. doi:10.12720/jait.12.4.302-310
Holcomb, L. B., King, F. B., and Brown, S. W. (2004). Student traits and attributes contributing to success in online courses: evaluation of university online courses. J. Interact. Online Learn. 2, 1–17.
Hsu, C. C., Chen, Y. L., Lin, C. Y., and Lien, W. C. (2022). Cognitive development, self-efficacy, and wearable technology use in a virtual reality language learning environment: a structural equation modeling analysis. Curr. Psychol. 41, 1618–1632. doi:10.1007/s12144-021-02252-y
Ismail, M., Kannangara, N., and Meddage, D. (2024). Impact of computer literacy on academic performance of higher national diploma students excluding HNDIT. Int. J. Res. Publ. Rev. 5, 293–316.
Junga, A., Kockwelp, P., Valkov, D., Schulze, H., Bozdere, P., Hätscher, O., et al. (2025). Teach the unteachable with a virtual reality (VR) brain death scenario – 800 students and 3 Years of experience. Perspect. Med. Educ. 14, 44–54. doi:10.5334/pme.1427
Karrer, K., Glaser, C., Clemens, C., and Bruder, C. (2009). Technikaffinität erfassen - der Fragebogen TA-EG. Der Mensch im Mittelpkt. Tech. Syst. 8, 196–201.
Keicher, F., Backhaus, J., König, S., and Mühling, T. (2024). Virtual reality for assessing emergency medical competencies in junior doctors – a pilot study. Int. J. Emerg. Med. 17, 125. doi:10.1186/s12245-024-00721-2
Kennedy, R. S., Lane, N. E., Berbaum, K. S., and Lilienthal, M. G. (1993). Simulator Sickness Questionnaire: an enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 3, 203–220. doi:10.1207/s15327108ijap0303_3
Khashe, S., Becerik-Gerber, B., Lucas, G., and Gratch, J. (2018). “Persuasive effects of immersion in virtual environments for measuring pro-environmental behaviors,” in Proceedings of the ISARC 2018. doi:10.22260/isarc2018/0167
Kim, H. Y., and Kim, E. Y. (2023). Effects of medical education program using virtual reality: a systematic review and meta-analysis. Int. J. Environ. Res. Public Health 20, 3895. doi:10.3390/ijerph20053895
Klus, C., Krumm, K., Jacobi, S., Willemer, M. C., Daub, C., Stoevesandt, D., et al. (2024). External post-mortem examination in virtual reality – scalability of a monocentric application. Int. J. Leg. Med. 138, 1939–1946. doi:10.1007/s00414-024-03229-9
Krassmann, A. L., Melo, M., Peixoto, B., Pinto, D., Bessa, M., and Bercht, M. (2020). “Learning in virtual reality: investigating the effects of immersive tendencies and sense of presence” (Cham: Springer), 270–286. doi:10.1007/978-3-030-49698-2_18
Lampton, D. R., Knerr, B. W., Goldberg, S. L., Bliss, J. P., Moshell, J. M., and Blau, B. S. (1994). The virtual environment performance assessment Battery (VEPAB): development and evaluation. Presence Teleoperators Virtual Environ. 3, 145–157. doi:10.1162/pres.1994.3.2.145
LaViola, J. J. (2017). 3D user interfaces: theory and practice. 2nd edn. Boston: Addison-Wesley.
Lewis, W., Agarwal, R., and Sambamurthy, V. (2003). Sources of influence on beliefs about information technology use: an empirical study of knowledge workers. MIS Q. 27, 657. doi:10.2307/30036552
Liu, J. Y. W., Yin, Y. H., Kor, P. P. K., Cheung, D. S. K., Zhao, I. Y., Wang, S., et al. (2023). The effects of immersive virtual reality applications on enhancing the learning outcomes of undergraduate health care students: systematic review with meta-synthesis. J. Med. Internet Res. 25, e39989. doi:10.2196/39989
Maddux, J. E., and Kleiman, E. M. (2016). “Self-efficacy: a foundational concept for positive clinical psychology,” in The Wiley handbook of positive clinical psychology, 89–101.
Makransky, G., and Klingenberg, S. (2022). Virtual reality enhances safety training in the maritime industry: an organizational training experiment with a non-WEIRD sample. J. Comput. Assisted Learn. 38, 1127–1140. doi:10.1111/jcal.12670
Makransky, G., and Lilleholt, L. (2018). A structural equation modeling investigation of the emotional value of immersive virtual reality in education. Educ. Technol. Res. Dev. 66, 1141–1164. doi:10.1007/s11423-018-9581-2
Maneuvrier, A., Decker, L. M., Ceyte, H., Fleury, P., and Renaud, P. (2020). Presence promotes performance on a virtual spatial cognition task: impact of human factors on virtual reality assessment. Front. Virtual Real. 1. doi:10.3389/frvir.2020.571713
Mergen, M., Will, L., Graf, N., and Meyerheim, M. (2024). Feasibility study on virtual reality-based training for skin cancer screening: bridging the gap in dermatological education. Educ. Inf. Technol. 30, 5251–5282. doi:10.1007/s10639-024-13019-w
Cherney, I. D. (2008). Mom, let me play more computer games: they improve my mental rotation skills. Sex Roles 59, 776–786. doi:10.1007/s11199-008-9498-z
Mousavi, S. M. A., Powell, W., Louwerse, M. M., and Hendrickson, A. T. (2023). Behavior and self-efficacy modulate learning in virtual reality simulations for training: a structural equation modeling approach. Front. Virtual Real. 4, 1250823. doi:10.3389/frvir.2023.1250823
Mühling, T., Schreiner, V., Appel, M., Leutritz, T., and König, S. (2025). Comparing virtual reality–based and traditional physical objective structured clinical examination (OSCE) stations for clinical competency assessments: randomized controlled trial. J. Med. Internet Res. 27, e55066. doi:10.2196/55066
Mühling, T., Späth, I., Backhaus, J., Milke, N., Oberdörfer, S., Meining, A., et al. (2023). Virtual reality in medical emergencies training: benefits, perceived stress, and learning success. Multimed. Syst. doi:10.1007/s00530-023-01102-0
Nash, E. B., Edwards, G. W., Thompson, J. A., and Barfield, W. (2000). A review of presence and performance in virtual environments. Int. J. Human-Computer Interact. 12, 1–41. doi:10.1207/s15327590ijhc1201_1
Naz, F. L., Raheem, A., Khan, F. U., and Muhammad, W. (2022). An effect of digital literacy on the academic performance of university-level students. J. Posit. Sch. Psychol. 6, 10720–10732.
Neher, A. N., Bühlmann, F., Müller, M., Berendonk, C., Sauter, T. C., and Birrenbach, T. (2025). Virtual reality for assessment in undergraduate nursing and medical education – a systematic review. BMC Med. Educ. 25, 292. doi:10.1186/s12909-025-06867-8
Oberdörfer, S., Fischbach, M., and Latoschik, M. E. (2018). “Effects of VE transition techniques on presence, IVBO, efficiency, and naturalness,” in Proceedings of the 6th symposium on spatial user interaction (SUI ’18) (New York, NY, USA: Association for Computing Machinery), 89–99. doi:10.1145/3267782.3267787
Oberdörfer, S., and Latoschik, M. E. (2019). Knowledge encoding in game mechanics: transfer-oriented knowledge learning in desktop-3D and VR. Int. J. Comput. Games Technol. 2019, 1–17. doi:10.1155/2019/7626349
Pellegrino, J. W., Alderton, D. L., and Shute, V. J. (1984). Understanding spatial ability. Educ. Psychol. 19, 239–253. doi:10.1080/00461528409529300
Peters, M., Laeng, B., Latham, K., Jackson, M., Zaiyouna, R., and Richardson, C. (1995). A redrawn Vandenberg and Kuse mental rotations test - different versions and factors that affect performance. Brain Cognition 28, 39–58. doi:10.1006/brcg.1995.1032
Rizzo, A., Kim, G. J., Yeh, S. C., Thiebaux, M., Hwang, J., and Buckwalter, J. G. (2005). “Development of a benchmarking scenario for testing 3D user interface devices and interaction methods,” in Proceedings of the 11th international conference on human computer interaction (Las Vegas, NV).
Rose, T., and Chen, K. B. (2018). Effect of levels of immersion on performance and presence in virtual occupational tasks. Proc. Hum. Factors Ergonomics Soc. Annu. Meet. 62, 2079–2083. doi:10.1177/1541931218621469
Rummel, B. (2016). “System Usability Scale – jetzt auch auf Deutsch,” in Section: additional blogs by SAP.
Seufert, C., Oberdörfer, S., Roth, A., Grafe, S., Lugrin, J. L., and Latoschik, M. E. (2022). Classroom management competency enhancement for student teachers using a fully immersive virtual classroom. Comput. and Educ. 179, 104410. doi:10.1016/j.compedu.2021.104410
Sharp, J. H. (2007). Development, extension, and application: a review of the technology acceptance model. Inf. Syst. Educ. J. 5.
Skarbez, R., Brooks, F. P., and Whitton, M. C. (2017). A survey of presence and related concepts. ACM Comput. Surv. 50, 1–39. doi:10.1145/3134301
Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Trans. R. Soc. B 364, 3549–3557. doi:10.1098/rstb.2009.0138
Slater, M. (2017). “Implicit learning through embodiment in immersive virtual reality,” in Virtual, augmented, and mixed realities in education. Editors D. Liu, C. J. Dede, R. Huang, and J. Richards (Singapore: Springer, Smart Computing and Intelligence).
Slater, M., Linakis, V., Usoh, M., and Kooper, R. (1996). “Immersion, presence, and performance in virtual environments: an experiment with tri-dimensional chess,” in Proceedings of the ACM symposium on virtual reality software and technology (VRST ’96) (Hong Kong: ACM), 163–172. doi:10.1145/3304181.3304216
Slater, M., and Wilbur, S. (1997). A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments. Presence 6, 603–616. doi:10.1162/pres.1997.6.6.603
Stevens, J. A., and Kincaid, J. P. (2015). The relationship between presence and performance in virtual simulation training. Open J. Model. Simul. 3, 41–48. doi:10.4236/ojmsi.2015.32005
Teo, T. (2009). Modelling technology acceptance in education: a study of pre-service teachers. Comput. and Educ. 52, 302–312. doi:10.1016/j.compedu.2008.08.006
Vandenberg, S. G., and Kuse, A. R. (1978). Mental Rotations, a group test of three-dimensional spatial visualization. Percept. Mot. Ski. 47, 599–604. doi:10.2466/pms.1978.47.2.599
Weißker, T., Kunert, A., Fröhlich, B., and Kulik, A. (2018). “Spatial updating and simulator sickness during steering and jumping in immersive virtual environments,” in 2018 IEEE conference on virtual reality and 3D user interfaces (VR), 97–104. doi:10.1109/VR.2018.8446620
Witmer, B. G., and Singer, M. J. (1998). Measuring presence in virtual environments: a presence questionnaire. Presence Teleoperators Virtual Environ. 7, 225–240. doi:10.1162/105474698565686
Keywords: virtual reality, VR, skill assessment, spatial ability, self-efficacy, immersive tendency, technology literacy, VR competence
Citation: Oberdörfer S, Heinisch M, Mühling T, Schreiner V, König S and Latoschik ME (2025) Ready for VR? Assessing VR competence and exploring the role of human abilities and characteristics. Front. Virtual Real. 6:1608593. doi: 10.3389/frvir.2025.1608593
Received: 09 April 2025; Accepted: 30 June 2025;
Published: 26 August 2025.
Edited by:
Caitlin R. Rawlins, United States Department of Veterans Affairs, United States
Reviewed by:
Matias Volonte, Northeastern University, United States
Megan Rumzie, United States Department of Veterans Affairs, United States
Copyright © 2025 Oberdörfer, Heinisch, Mühling, Schreiner, König and Latoschik. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Sebastian Oberdörfer, sebastian.oberdoerfer@uni-wuerzburg.de
†These authors have contributed equally to this work and share first authorship