- 1Department of Bioengineering, Santa Clara University, Santa Clara, CA, United States
- 2XR Safety Intelligence, San Francisco, CA, United States
- 3Department of Computer Science and Engineering, Santa Clara University, Santa Clara, CA, United States
Introduction: Immersive technologies enabled by AI present latent risks to human subjects’ protections in research settings. Standard methods of ensuring confidentiality, data management, and safety do not fully encompass the scope of the data types, data flows, and user experiences involved. A comprehensive approach to risk assessment and mitigation is warranted.
Methods: 100 research compliance officers analyzed three case studies (Biodata, Haptics, Motion Tracking) in a structured sequence (Open Inquiry, Technical Analysis, Risk Assessment, Mitigation Strategies). The text responses to the questions were thematically coded and summarized in the context of the review criteria (Data Management, Informed Consent, Safety, and Training).
Results: The Biodata case study presented challenges in psychological safety that could be mitigated with improved withdrawal procedures. The Haptics case study introduced novel physical safety concerns with direct brain stimulation, which called for thorough training of study personnel. The Motion Tracking case study exposed the difficulty of anonymization, which may require enhanced data security measures in the data management plan.
Discussion: Compliance officers’ depth of technical knowledge shaped their analysis of the case studies. Yet when informed, the adapted application of human subjects’ protections policies could capture much of the latent risk. The proposed review criteria establish a structure for assessing research practices with immersive technology.
Introduction
Standards and policies regarding emerging technologies that interface with users and harvest sensitive user data are still under development (Digital Regulation and Cooperation Forum, 2022; Internal Market and Consumer Protection, 2024). The emerging technologies discussed here are immersive technologies (extensions of perception that create presence and immersion mediated by digital devices), biosensors (devices collecting physical, physiological, or behavioral data from the body), and Artificial Intelligence (AI; machines/software capable of taking human-like actions based on data without explicit programming). Upstream of policy consensus, the research community is using and creating these technologies and, in the process, conducting human subjects research. This work crosses the boundary between industry and academia, as in the Meta-sponsored XR Program and Research Funds (European Metaverse Research Network, 2023; Meta, 2022) or the Sony Research Awards Program (Sony, 2024). These new paradigms of experimental design and research collaboration challenge the norms of human subjects’ protections and data privacy practices. To come to terms with this reality, assessments of technology sophistication, research practices, and compliance processes need to be woven together into a robust framework usable by all stakeholders.
AI and immersive technology are each rapidly evolving fields, making it difficult for research compliance offices to stay up to date. Over the past 5 years, new immersive devices have been announced every few months, while advances in AI capabilities and functionalities have come even faster. More challenging still, for compliance officers and researchers alike, is understanding the emergent risks when AI is applied to immersive technologies for their core functions while simultaneously opening vulnerabilities to data misuse. Latent risks in data generated by AI-enabled immersive technology can unravel the typical safeguards researchers use to protect participant data; one example is re-identification of participants from embodied data. With AI, it is possible to identify an individual out of 50,000 users from less than 15 s of controller position data recorded while playing the virtual reality (VR) game Beat Saber (Nair et al., 2023a). Similarly, resting-state EEG data may serve as a unique identifier of a person, signifying their “brainprint” (Wang et al., 2020; Wu et al., 2022; Yang et al., 2022). Researchers are collecting data that has the potential to identify their participants even if they never intend to analyze it in that manner. Yet safeguarding sensitive data requires knowledge of best practices and an infrastructure to ensure data privacy and security, which is not typical in research code of conduct documents.
AI used in experimental design also presents a risk to psychological safety in the context of immersive technology. The presence and immersion induced make the participant’s experience feel real (Diemer et al., 2015; Slater et al., 2009). The level of engagement and embodied experience also supports learning and memory (PwC, 2019; Rubo et al., 2021). Thus, when the subject matter of the research is provocative, the risk of psychological distress is magnified. The research community has not reached a consensus on whether simulating moral dilemmas is an ethical research practice, as illustrated by the VR trolley problem (Ramirez and LaBarge, 2020; Rueda, 2022). For some research questions, psychological stress is essential, and the stimulus causing the stress may be dynamically adjusted by a closed-loop system responding to the participants’ reactions. If the adaptive algorithm is not carefully designed, the intensity of the stimulus may exceed tolerable levels before a facilitator can intervene in response to overt participant distress. Researchers must be aware of these risks and have processes for simulated testing prior to human testing.
A new variable in study design is the use of generative AI to create the testing environment and stimuli, which can be unpredictable in their output. Generative AI is being integrated into VR user interaction such that the virtual environment may be created on the fly from user prompts (Chamola et al., 2023; Robotics and Lv, 2023). Consider a scenario in which a research question uses this functionality in an open-source generative AI tool to investigate the impact of personalized environments in an educational application. If a minor participant gives a prompt that unexpectedly creates a distressing virtual environment, the use of generative AI puts into question who is responsible for that adverse event. The researcher did not put boundaries on what the model could create, yet the output was effectively user-generated content, analogous to drawing a picture. A compliance officer would be faced with making that determination based on their institution’s AI policies.
Immersive technologies intersect with the research community in two primary ways. First, researchers in the engineering fields create the devices and infrastructure enabling the technology and the AI engines that process and interpret the data collected on them. Second, behavioral researchers utilize virtual environments as an experimental platform to observe and manipulate participant behavior. The tools developed and used by researchers, which necessarily engage human subjects, have become more intrusive data collection systems and more convincing reality generators than past generations of immersive technology (Hamad and Jia, 2022). For example, prior to 2018, a VR device was a closed system tethered to a computer. Now Extended Reality (XR) devices are wireless, internet-connected devices that track where a user is and how they behave. XR is embodied, spatial, AI-enabled, and connected to the internet. These fundamental changes create a clear imperative to develop and disseminate a research code of conduct for immersive technology research. As of now, no such definitive resource exists, leaving dangerous variation in the level of IRB review and lab practices.
To address the gap in guidance for human subjects’ protections, this study derived a checklist of review areas unique to XR and AI research, based on XR privacy and safety risks and on analysis of diverse case studies that challenged institutional review board (IRB) norms. The case studies each use immersive technologies, biosensing, or AI in novel ways that do not obviously map to existing policies for human subjects protections. To test the feasibility of this guidance, the team partnered with Public Responsibility in Medicine and Research (PRIM&R) and the Santa Clara University (SCU) Markkula Center for Applied Ethics. The novel resources were evaluated by attendees of PRIM&R workshops. The first, virtual workshop in July 2024 had 67 participants. The in-person meeting in November 2024 drew over 100 research compliance officers from public and private universities, military research institutions, and public health departments. The focus of the workshop was to educate compliance officers on human subjects protections related to AI and immersive technologies and to conduct structured analyses of case studies covering risks in biodata privacy and psychological safety, physical safety risks in brain stimulation haptics, and re-identification risks in motion tracking. The analysis of participant inputs to survey questions and case analyses assessed this population’s current understanding of XR and AI technologies and tested the usability of the risk assessment framework for IRB processes.
Methods
The methodology to assess novel risks in immersive technology research is introduced first, followed by a description of the workshop cohort data sources. Lastly, the qualitative analysis methods for thematic synthesis of the participant responses are stated.
Review criteria
The development of the protocol review criteria by the authors was shaped by three priorities. The first goal was transparency in the collection, use, and storage of data, which provides committees with the knowledge to evaluate privacy risks to participants. The second goal was to inform participants of the planned and potential uses of the data and provide choices on levels of consent. The third goal was to ensure the safety of participants through the research team’s empirical risk assessment and mitigation, as well as relevant supplemental training beyond basic Human Subjects Research certification. The authors consulted standards in privacy and safety for the technologies used in commercial settings [e.g., (XRSI, 2020; McGill, 2021)], as well as norms for Human Subjects Research as demonstrated in the Common Rule (45 CFR part 46, US Health and Human Services) and EU iRECS (Aucouturier and Grinbaum, 2023; XR in Research: A Case Study, 2022).
This research informed the suggested additional levels of review for human subjects’ protections in XR research (Table 1). The review criteria are organized into data management, informed consent, personnel training, and safety. Within each category, specific inputs are requested for the protocol review. The IRB committee may then determine whether the information provided meets compliance and ethical standards. This reductionist approach forces transparency at a level that allows reviewers to align the information with existing processes (e.g., data types, user account terms). Secondarily, the list puts the responsibility on the submitter to synthesize the emergent risks that arise when those individual parameters are put together (e.g., AI inferences, future use). A more detailed description of the review criteria is in the Supplementary Material.
Table 1. Review criteria for study protocols and sample rankings of level of relevance in broad study types by Expert and Learner assessments.
Data collection
Responses to large group polling and small group discussion notes were anonymously collected from attendees of the “Immersive Technologies and Human Subjects Protections” workshop at the 2024 PRIM&R Annual Conference (Seattle, WA). The attendees were IRB compliance officers from many institution types (e.g., Higher Education, Military, Public Health Agencies) and from numerous countries (e.g., United States, Australia, UAE). Approximately 100 people attended the workshop. The studies involving human participants were reviewed and approved by Santa Clara University IRB as secondary data use. Written informed consent from the participants was not required to participate in this study in accordance with the national legislation and the institutional requirements.
Polling
Participants individually responded to polling questions during the facilitation, before and after the case study analysis. The Mentimeter polling service (www.mentimeter.com) delivers a dynamic web link for real-time collection of responses; the anonymous responses are displayed on the host’s screen for viewing by the whole group. Between 54% and 66% of participants responded to the polling questions. Open text responses were labeled by their association with keywords. If a response was not associated with a keyword, it was labeled “Other.” Responses such as “I do not know” and responses with no relevance to technology were labeled “Unknown.”
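To make the labeling rule concrete, the following is a minimal sketch of the keyword-association procedure; the keyword lists and marker phrases are illustrative, not the actual codebook used in the study.

```python
# Minimal sketch of keyword-association labeling for open text polling
# responses. Keyword lists and marker phrases are illustrative only.
UNKNOWN_MARKERS = ("i do not know", "i don't know", "no idea")

KEYWORD_LABELS = {
    "Immersive": ("vr", "virtual reality", "headset", "augmented", "immersive"),
    "Data protection": ("privacy", "confidentiality", "security", "ownership"),
}

def label_response(text: str) -> str:
    """Return the first keyword label matching the response, else a fallback."""
    lowered = text.lower()
    if any(marker in lowered for marker in UNKNOWN_MARKERS):
        return "Unknown"  # non-answers and off-topic responses
    for label, keywords in KEYWORD_LABELS.items():
        if any(keyword in lowered for keyword in keywords):
            return label
    return "Other"  # no keyword association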
Case studies
Participants worked in groups of 8–10 through structured case studies on a dynamic webpage (archbee.com). The case studies were divided into four analytical domains (Open Inquiry, Technical Analysis, Risk Assessment, Mitigation Strategy), each discussed for 15 min, for a total of 60 min. Each section had specific, targeted questions on the information provided; these were not exhaustive analyses. One participant served as the notetaker and entered the group’s responses in an online form (typeform.com).
Three case studies were analyzed by 10 groups. Each case study had a different research theme: biodata (4 groups) (Daniel-Watanabe et al., 2025), haptics (3 groups) (Tanaka et al., 2024), and motion tracking (3 groups) (unpublished). Researchers provided study protocols, informed consent documents, published works, and interviews to develop each case study. The case studies, as presented to the participants, are available at www.scu.edu/ethics/focus-areas/bioethics/resources/immersive-technologies-and-human-subjects-protections/.
Data analysis
A structured qualitative analysis examined workshop responses from multiple participant groups across the four segments of each case study analysis (Open Inquiry, Technical Analysis, Risk Assessment, Mitigation Strategy). The analysis was conducted with Claude 3.5 Sonnet. The analysis followed a three-stage process for each segment: content aggregation, thematic analysis, and structural organization. The process applied is documented in the Supplementary Material.
Content aggregation
First, workshop responses from all groups were compiled into a CSV file and organized according to the predefined question sets within each case study. This initial aggregation established a comprehensive dataset while maintaining the original context of participant discussions. This dataset was the input provided to the model.
Thematic analysis
Using a consistent prompt structure for each case study and section, a systematic review of the aggregated responses identified recurring themes, points of consensus and divergence, and unique perspectives across groups. This process emphasized the preservation of respondent viewpoints without introducing external interpretation. Key themes were documented using the respondents’ own terminology to maintain authenticity. These were then mapped to the predefined risk categories based on keyword matching.
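As an illustration of this step, the sketch below shows how a consistent per-segment prompt might be applied to the aggregated responses using the anthropic Python client. The CSV column names, prompt wording, and model identifier are assumptions for illustration only; the actual prompts are documented in the Supplementary Material.

```python
# Illustrative sketch of the per-segment thematic analysis step, assuming
# the `anthropic` Python client. Column names and prompt text are
# placeholders, not the study's actual materials.
import csv
import anthropic

def load_segment_responses(path: str, segment: str) -> list[str]:
    """Collect all group responses for one analytical segment from the CSV."""
    with open(path, newline="") as f:
        return [row["response"] for row in csv.DictReader(f)
                if row["segment"] == segment]

PROMPT_TEMPLATE = (
    "Review the aggregated workshop responses below for the {segment} segment. "
    "Identify recurring themes, points of consensus and divergence, and unique "
    "perspectives across groups. Use the respondents' own terminology and do "
    "not introduce external interpretation.\n\n{responses}"
)

client = anthropic.Anthropic()  # reads the API key from the environment

def analyze_segment(path: str, segment: str) -> str:
    responses = load_segment_responses(path, segment)
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model identifier
        max_tokens=2000,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(
                       segment=segment,
                       responses="\n---\n".join(responses))}],
    )
    return message.content[0].text
```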
Structural organization
Lastly, themes were organized into an outline ranked by frequency of occurrence and by relevance as indicated in the responses (e.g., use of “very” or “most” compared to “maybe” or “small”), which ensured an accessible and logical flow for researchers to review. Consistent prompt instructions were maintained across all four analytical domains to produce comparable outputs for each case study. This process aligns with manual thematic coding analysis, in which semantic associations form clusters that are then assigned themes (Williams and Moser, 2019).
Several quality control measures by the researchers were implemented to ensure analytical rigor. A verification step cross-referenced outputs against the original responses to preserve an accurate and balanced representation of participant inputs. In the analysis, uniform formatting and hierarchical structure were applied, and a professional language style was selected. The analysis adhered to strict boundaries (e.g., limiting analysis by segment and mapping to risk categories) to maintain validity and did not draw on external sources of data. This methodological approach ensured a systematic, transparent, and reproducible analysis of the workshop responses while maintaining the integrity of attendee contributions. A researcher naive to the analysis process read the original responses and then the final outputs to confirm or revise the thematic analysis.
Results
Polling
In this cohort, 54% of those who responded had a general understanding of XR technology, as shown by reporting terms associated with the field (Figure 1A). Of the fraction who had reviewed protocols with XR elements, the predominant concerns were about data protection: confidentiality, privacy, security, ownership, and analytics (Figure 1B). These issues are linked to both AI applications and data management plans. When asked to select the primary area of concern with XR research practices, informed consent was ranked highest (39%), followed by safety (35%) and data management (26%). The challenge lay in how to disclose the multidimensional risks to participants. Safety was a persistent issue because the experiments included brain stimulation and extreme stress environments. Training of researchers was not considered a primary issue, yet it was seen as a way to mitigate safety risks and practice responsible data management.
Figure 1. Workshop Survey. Research compliance participants’ responses to the polling questions (A) What is XR? and (B) What issues have you encountered in reviewing XR study protocols?
Case studies
All three case studies used in the workshop may be requested from the authors. Here, each study is summarized and the participant responses are synthesized based on the specific questions for each section. The categorical risk analyses by the authors and participants are compared in Table 1. The judgments of low, medium, or high risk were based on the summative responses across all segments of the analysis: low was designated if the category was not mentioned or not prioritized over other categories; medium was marked when the issue was raised across more than one question in at least one group but not given high priority; high was indicated when the issue was observed in multiple groups and emphasized in the discussion.
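For clarity, this rubric can be restated as a small function; the parameter names are ours, introduced only to make the decision rule explicit.

```python
# Compact restatement of the low/medium/high designation rules applied to
# each review category. Parameter names are illustrative, not study terms.
def risk_level(groups_mentioning: int,
               max_questions_in_one_group: int,
               emphasized_in_discussion: bool) -> str:
    if groups_mentioning > 1 and emphasized_in_discussion:
        return "High"    # observed in multiple groups and emphasized
    if max_questions_in_one_group > 1:
        return "Medium"  # raised across more than one question in a group
    return "Low"         # not mentioned, or not prioritized over others
```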
Case study 1: biodata
Study description
This study investigated the use of biofeedback mechanisms to control physiological signals, such as heart rate variability (HRV) and breathing rate, in response to a stressful situation (a horror scenario). Using a VR setup, participants were trained to regulate their physiological responses in a non-stressful environment, followed by a session in a stress-inducing VR environment where they had to apply these techniques. The goal of the study was to assess whether biofeedback training can help individuals manage their physiological responses to stress in real time. The broader objective was to explore the applicability of biofeedback training for managing stress and anxiety in real-world scenarios, with a potential focus on clinical applications such as anxiety management and mental health therapy. The study was carried out in a Department of Psychiatry with a physician-scientist principal investigator and Psychology researchers.
Open inquiry
The groups were asked to discuss the ethical challenges in relation to psychological safety and consent. Participants consistently raised significant ethical concerns about psychological safety, data management, and data integrity during the stress protocols, and calibration of the stress-inducing virtual environments was seen as the most important design issue for human subjects’ protections. Respondents expressed particular apprehension regarding the adequacy of informed consent procedures, with many noting that pre-study descriptions failed to convey the intensity of VR-induced stress experiences. Participant screening protocols were also identified as problematic, with suggestions for more comprehensive exclusion criteria for individuals with pre-existing conditions. The responses also revealed substantial concerns about measurement validity, as participants questioned the effect of brief acclimatization periods between experimental conditions, which could affect the reliability of subsequent physiological measurements.
Technical analysis
The groups were provided a summary of study procedures and asked to discuss the research participant experience and the data use and management. The technical review identified critical considerations for balancing effective stress induction with participant safety protocols. Assessment of calibration methodologies highlighted the implementation of a maximum intensity threshold during pilot testing as a foundational safety parameter, though concerns emerged regarding demographic representation within pilot groups. Subjective stress scaling showed notable variability across age demographics, suggesting the need for age-stratified calibration protocols.
Participant monitoring frameworks were evaluated for effectiveness, and the review identified an absence of physiological safety thresholds, particularly maximum heart rate parameters, along with gaps in the emergency response protocols and standardized procedures for managing adverse events. While study participants scored moderately on state-trait anxiety inventories, the assessment of acute stress was fully subjective and did not consistently align with the physiological measures (Daniel-Watanabe et al., 2025). This finding puts into question whether real-time physiological monitoring alone can serve as a definitive indicator of acute distress.
Competency in the breathing relaxation technique varied across research participants, which was seen as a safety risk during stress exposure. Participants who demonstrated mastery of biofeedback-guided breathing techniques exhibited lower peak heart rates and greater change in HRV during stress scenarios compared to those with minimal training. The findings supported the implementation of progressive advancement criteria based on individual readiness evaluations rather than standardized timeframes.
Respiration and pulse data were collected from participants and identified as potentially sensitive and re-identifiable data types. To date, however, these data types have not been shown to be unique and rich enough to identify individuals, in contrast to electrocardiograms (ECG) (Agrawal et al., 2023). The technical review identified privacy vulnerabilities requiring enhanced data protection protocols, particularly for integrated physiological and behavioral response data.
Risk assessment
The focus of this analysis was on the informed consent process and the maintenance of data privacy. The risk review identified significant concerns regarding participant safety and informed consent adequacy. All review groups emphasized the critical importance of comprehensive psychological and physiological monitoring protocols, with a particular focus on VR-specific challenges that traditional research safeguards might not address. The assessment revealed that the movie rating comparison inadequately represented the potential intensity of VR experiences, undermining participants’ ability to provide truly informed consent.
Reviewers universally highlighted the necessity for clearly defined withdrawal mechanisms and emergency response protocols, which were vague in the consent form. The absence of vital sign monitoring thresholds and undefined real-time withdrawal procedures were identified as critical safety gaps requiring immediate remediation. Motion sickness protocols and participant positioning requirements were similarly undefined, creating additional physical safety concerns.
The informed consent process was deemed insufficient by multiple review groups, particularly regarding the description of potential stress levels. Reviewers noted significant disagreement regarding the adequacy of technical language clarity in consent materials and varying standards for what constituted appropriate disclosure. The compensation structure (approximately $15) also generated divided opinions regarding fairness relative to potential participant distress.
Data privacy considerations emerged as a substantial risk area, with universal concern about re-identification risks associated with multi-sensor biometric data collection. Reviewers emphasized that the current data management framework lacked clear de-identification procedures and explicit consent mechanisms for potential data sharing. The assessment highlighted unaddressed vulnerabilities in vendor agreements, long-term storage protocols, and commercial use restrictions.
Mitigation strategy
The groups were provided with the mitigation strategies employed in the study, then asked for further recommendations. The review of mitigation strategies revealed that researchers implemented several foundational protections including a maximum intensity rating system for stress scenarios, real-time physiological monitoring, post-study psychological support, and equipment safety protocols. The analysis found other areas to improve safeguards for participants.
The thoroughness of inclusion and exclusion criteria was flagged across all review groups. Participants emphasized the need for VR demonstration experiences during the consent process to provide realistic exposure expectations. There was concern that the study did not exclude a history of anxiety, depression, or PTSD, yet all participants were assessed on a standardized trait anxiety inventory and had moderate anxiety traits on average (Daniel-Watanabe et al., 2025). Multiple groups advocated for implementing a gradual stress escalation approach rather than immediate immersion in high-intensity scenarios, while the inadequacy of pre- and post-study psychological screening protocols was universally noted.
Safety mechanism deficiencies emerged as a primary concern, with all workshop groups emphasizing the need for clearly defined withdrawal procedures during active VR sessions. Suggestions included implementing automatic shutdown protocols triggered by elevated vital signs, developing intuitive physical “tap out” systems accessible during immersion, and establishing clear verbal safe word processes. The current approach was deemed insufficient for ensuring participants’ psychological safety during acute distress. However, the weak correlation between physiological metrics and subjective stress suggests that automatic triggers may not be more effective than participant-initiated withdrawal.
Long-term monitoring recommendations focused on establishing regular participant check-ins with structured follow-up protocols for addressing adverse effects and implementing specialized staff training for VR-specific psychological responses. The analysis of VR versus flat-screen psychological impacts found consensus that immersive environments potentially generate heightened psychological responses requiring additional safeguards.
Case study 2: haptics
Study description
This case study explored the development, implementation, and application of a haptic interface system utilizing Transcranial Magnetic Stimulation (TMS) to generate virtual touch sensations. The research study aimed to develop a non-invasive brain stimulation technique called source-effector haptics to enhance VR experiences. Stimulation of the brain region specific to the body part engaged in the user interaction (e.g., the right hand) was synchronized with the visual effects. The stimulation was a brief 10 ms pulse, with 10 pulse-interaction pairings per experimental session. Prior to the haptics study, a TMS calibration study was conducted, focused on characterizing the range of haptic sensations that could be induced by stimulating different areas of the sensorimotor cortex. A grid search technique was employed to determine the motor and sensory thresholds for various body parts, which required hundreds of TMS pulses per session. This calibration was critical for setting the parameters required for effective haptic feedback. Operators of this novel system were computer science graduate students who were trained in basic first aid and trained to use the TMS device by researchers in the Psychology department.
Open inquiry
The groups were asked to discuss the ethical challenges of the study design, in particular the coverage of novel risks in the consent process. Workshop participants consistently identified precision limitations of TMS technology when repurposed from clinical settings to research. Workshop responses emphasized potential long-term neurological implications of brain stimulation that extend beyond the immediate research objectives, with particular attention to regulatory compliance requirements typically associated with medical-grade interventions.
Investigation of informed consent adequacy revealed a nuanced perspective among participants. While the general consent framework was deemed sufficient for adult participants, responses indicated critical gaps in addressing the novel risks introduced by TMS-VR integration. Participants emphasized the need for explicit documentation regarding the combinatorial effects of simultaneous brain stimulation and immersive virtual environments, particularly concerning off-target neurological effects.
Three independent groups analyzed the case study and differed in their perception of the most important risks. Technical precision and long-term effects dominated one group’s concerns, while neurological safety was prioritized by another. The third group suggested that TMS risks were minimal when the device was operated according to established parameters, yet emphasized that VR itself introduced separate considerations requiring distinct mitigation strategies. This divergence in risk prioritization demonstrates the multifaceted ethical landscape surrounding non-medical applications of brain stimulation technologies.
Technical analysis
After reviewing the study procedures, the groups were asked to identify the safety measures implemented. Workshop participants reported significant concerns regarding electromagnetic interference between devices and psychological impact monitoring. They noted existing safety protocols, including overstimulation prevention guidelines and metal-related precautions, but identified several technical gaps requiring attention. Participants highlighted calibration uncertainties, the absence of EEG monitoring for brain activity, and inadequate robotic TMS tracking management.
The groups identified critical deficiencies in psychological impact assessment, emphasizing limited protocols for monitoring unintended effects and inadequate information regarding session frequency and long-term observation. When TMS is used in single sessions according to normal operations, long-term effects have not been observed, except in people with epilepsy, which is a strict exclusion criterion (Rossi et al., 2021). Participants stressed the need for more comprehensive risk screening procedures addressing medical history and VR-specific susceptibilities.
Risk assessment
The summary of the IRB review process was followed by questions about the fitness of the type of review and the informed consent contents. Participants highlighted potential risks spanning physical, psychological, and procedural domains. Physical risks emerged as a primary concern in terms of potential neurological impacts, including brain plasticity changes, temporary paralysis, equipment-related neck strain, and the possibility of experiencing persistent ghost sensations following study participation. The groups showed concern regarding psychological risks, especially in VR, suggesting that such interfaces may present more complex psychological challenges compared to traditional research methods.
The risk analysis identified insufficiency in the medical screening protocol. Workshop participants emphasized the necessity of developing robust inclusion/exclusion criteria that account for neurological variables not typically considered in standard VR research. Most notably, participants classified the research as presenting greater than minimal risk and advised recategorization from behavioral to biomedical research given the direct brain stimulation.
Informed consent emerged as a critical area of discussion. Key recommendations focused on improving risk transparency, including providing more detailed explanations of potential physical and psychological impacts and clarifying data management protocols. Participants emphasized the need for clear information about data de-identification, ownership, and potential risk variations across different demographic groups.
Monitoring and mitigation strategies received particular scrutiny. Participants unanimously expressed reservations about the current safety monitoring approach, characterizing verbal monitoring as insufficient. They strongly recommended implementing real-time physiological monitoring techniques, such as continuous EEG and blood pressure monitoring, to ensure participant safety throughout the study.
Mitigation strategy
The groups discussed the comprehensiveness of the mitigation strategies used and whether the study was suitable for a computer science lab. The study protocol review process appeared lacking from the perspective of respondents. While generally supportive of the full board review, participants recommended enhanced oversight mechanisms. They advocated for a more specialized approach, suggesting the integration of a biomedical review board and the inclusion of expert consultations from diverse fields including psychiatry and engineering. The findings underscored the critical importance of interdisciplinary staff composition, advocating for expertise that extends well beyond traditional computer science domains. Workshop participants strongly recommended the inclusion of medical professionals, including physicians and psychological experts on the research team, to provide oversight and ensure immediate medical intervention capabilities when necessary.
Training emerged as a fundamental requirement for research personnel. Recommended protocols encompassed comprehensive first aid certification, in-depth device operation training, experiential learning from seasoned researchers, and a thorough understanding of potential participant risks. This multifaceted approach aimed to create a robust framework of preparedness and professional competence.
Safety infrastructure was identified as a paramount concern, with recommendations including comprehensive risk severity assessment procedures, detailed emergency response protocols, and ensuring appropriate medical equipment availability. The participants recommended establishing structured follow-up protocols, monitoring potential neurological impacts, conducting comprehensive assessments of unforeseen psychological effects, and implementing systematic documentation of participant experiences.
Communication and informed consent received significant attention, with proposals for enhanced strategies to address participant comprehension. Recommendations included providing clear explanations of technical terminology, developing comprehensive risk disclosure frameworks, and ensuring participants fully understand the potential study implications and associated risks.
Case study 3: motion tracking
Study description
This study investigated the interaction between virtual and real objects in a mixed-reality environment and its impact on cognitive load and task performance. The main objective was to assess how users manage real and virtual objects simultaneously while solving a virtual puzzle. A secondary question used the motion data from the puzzle interactions to test the accuracy of re-identifying participants despite anonymization efforts. The additional data collected included an assessment of presence, a task load questionnaire, a survey of user experience, and participant demographics. The same group previously demonstrated that, despite anonymization efforts, machine learning models can identify individuals from motion tracking data collected in VR environments with over 95% accuracy using less than 5 min of tracking data. The high accuracy of re-identification was observed across different tasks, whether participants were engaged in simple viewing tasks or more interactive tasks involving hand controllers. The research was conducted in a Computer Science department with a healthy adult population. The informed consent articulated how the data would be used in the study and prompted itemized consent for specific uses and future use.
Open inquiry
The workshop respondents identified ethical challenges surrounding data privacy and participant identification, and questioned the relationship between machine learning and the research methods. Central to the discussion were uncertainties regarding data control and ownership, with participants acknowledging that the use of commercial devices may expose sensitive research participant data. The discussion suggested gaps in understanding the ultimate destination and management of collected data, particularly concerning the distinctions between cloud storage and local server configurations and the potential risks associated with investigator data access.
Fundamental methodological ambiguities emerged regarding the research purpose and operational framework. Participants noted significant information gaps, including undefined VR equipment specifications, unclear testing environment parameters, and limited transparency about data anonymization processes. Questions emerged about the relationship between data anonymization and machine learning applied to motion tracking.
Incidental findings through machine learning analysis presented another significant ethical dimension, raising questions about the scope of data utilization and participant consent. The potential for unexpected insights derived from aggregated or anonymized datasets created complex ethical considerations that challenge traditional consent frameworks.
Technical analysis
Given a summary of the data collected and the analysis plan, the groups were asked about the sensitivity of the raw data types and the inferences from AI. The technical analysis revealed complex privacy concerns arising from the study’s data collection methodologies, particularly focusing on the implications of spatial data tracking. The integration of multiple data types—including physiological indicators like breathing patterns, location-based information, and spatial tracking—significantly amplified potential privacy risks.
Device ownership and data management introduced additional layers of complexity. The Meta Quest 3 platform raised substantial third-party data access concerns, with the device manufacturer maintaining extensive control over collected data. Participants expressed concern regarding potential data monetization, unauthorized usage beyond research purposes, and potential impacts on personal insurability (based on incidental findings).
Contextual factors played a role in privacy risk assessment. While the study utilized neutral tasks (3D puzzle), participants emphasized that the nature of the activity did not inherently mitigate privacy concerns. The application of machine learning to motion data could increase risk of re-identification and also potentially reveal unintended patterns and generate unexpected inferences (e.g., essential tremor). Of particular concern was the diagnostic potential of the collected data, which pointed to the need for clear protocols regarding incidental findings and comprehensive data management strategies.
Future implications of data analysis emerged as another area of uncertainty. Participants articulated the challenges of predicting long-term ramifications, particularly given the rapidly evolving capabilities of AI. The potential for unforeseen inferences and the difficulty of comprehensively describing future data usage scenarios underscored the need for adaptive, forward-thinking privacy protections.
Risk assessment
A summary of the review process, informed consent, and data management plan was used to answer questions on participant anonymity and choice in the informed consent process. The analyses revealed variances in stakeholder perspectives regarding participant identification, consent processes, and data protection strategies. Perspectives on participant confidentiality and consent demonstrated notable divergence: one group considered the current mechanisms sufficient for the scope of the research while acknowledging potential future risks, whereas another group explicitly characterized the de-identification process as inadequate and noted the difficulty of understanding the technical language in the consent documentation.
Despite differences in opinion on how to protect participants, a consensus emerged regarding motion-tracking data sensitivity. All groups uniformly recognized that combining motion tracking with additional data types substantially increases re-identification risks. One group compared this to the same level of data exposure tolerated in consumer device usage. The analysis exposed significant variations in temporal risk considerations. One group examined data protection from a longitudinal perspective, exploring potential future implications. Other groups predominantly concentrated on immediate privacy concerns, reflecting different approaches to risk evaluation and mitigation.
Mitigation strategy
In addition to asking for recommendations on the implemented mitigation strategies, the groups were asked to discuss whether individual classification of participants from XR data qualified as human subjects research. A consensus emerged regarding the inadequacy of current confidentiality protections, with participants challenging the very terminology of “anonymization” as inappropriate for body data in XR. The discussion highlighted the need for more precise and transparent consent language that accurately reflects the sophisticated nature of data collection and potential identification risks.
Data management plan improvements focused on comprehensive security strategies, including implementation of rigorous risk assessments, strategic data obfuscation through noise introduction, and establishment of periodic data audits. Participants emphasized the importance of detailed procurement reviews and pre-study data ownership negotiations to create robust protection mechanisms. Groups pointed out that current anonymization techniques may become obsolete, which puts greater importance on ongoing assessment of data security and continuous refinement of protection strategies.
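As one illustration of the noise-introduction strategy suggested by participants, the sketch below adds calibrated Gaussian noise to a motion trajectory. The data layout and noise scale are assumptions; in practice, the scale must be calibrated so the task analysis survives while identifying micro-patterns are degraded, and, as the groups noted, such obfuscation may not withstand future models.

```python
# Minimal sketch of noise-based obfuscation for motion-tracking data.
# Assumes positions in meters sampled at a fixed rate; sigma is illustrative.
import numpy as np

def obfuscate_motion(positions: np.ndarray, sigma: float = 0.005,
                     seed: int | None = None) -> np.ndarray:
    """Add zero-mean Gaussian noise to an (n_samples, 3) trajectory."""
    rng = np.random.default_rng(seed)
    return positions + rng.normal(0.0, sigma, size=positions.shape)
```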
The classification of individuals via machine learning models presented an unsolved problem in human subjects’ protections. Participants presented divergent perspectives, reflecting the complex regulatory landscape. One group argued that the context of the research determined the sensitivity of the data, while another group advocated for consistent IRB oversight, including scenarios involving self-experimentation. Notably, FDA guidelines were cited as recognizing single-subject studies as human subjects research, which may serve as a model for studies that cannot adequately de-identify sensitive data.
Key recommendations included developing more sophisticated anonymization strategies, enhancing consent form accessibility, establishing clearer standards for research data protection, and implementing comprehensive long-term data security frameworks.
Comparison of risk categories
The inputs from the workshop respondents were mapped to the review criteria in the suggested guidance and ranked based on the frequency of mention and the severity of concern (Table 1). In the preparation of the case studies, the data privacy and user safety experts (Expert) also evaluated the review criteria. There were many areas of alignment between the research compliance officers’ (Learner) and the Expert assessments (e.g., most elements of Informed Consent). Yet gaps in understanding the real risks associated with AI and XR were evident. In some areas, workshop participants overestimated safety risks (e.g., AI safeguards in Haptics), while in others they overlooked mitigation strategies (e.g., industry guidance in Biodata). The greatest discordance was in Data Management across all case studies.
Discussion
This report utilized the responses from group analyses of structured case studies by research compliance officers to assess the current understanding of human subjects’ protections for XR and AI research. The respondents varied significantly in their experience with these technologies, and this was reflected in the technical accuracy of their risk assessments and their awareness of mitigation strategies. The review of the case studies by this cohort identified areas of ambiguity in the scope of human subjects research, constructive critiques of the design of informed consent documents, and suggested mitigation strategies to prevent adverse events. The exchange of expertise between the researchers and the IRB officers produced a more comprehensive analysis of the latent risks of XR and AI research, underlining the model of shared responsibility in human subjects research.
XR research, especially with the integration of AI in the applications and study data analyses, poses novel risks that are not captured by the standard social and behavioral IRB review processes. As XR is utilized more extensively in human-computer interaction research and behavioral studies, new guidance is needed to assess the risks associated with sensitive data and the complex data flow inherent to the technology. Modifications to the informed consent process that more effectively communicate the psychological impact of immersive experiences and itemized consent may also be implemented without major changes to existing processes. Accurate risk assessment and effective mitigation strategies are illustrated via the three case studies as a preliminary step to the deployment of XR human subjects’ protection resources to the research community.
Epistemic transformation and informed consent
“With new technologies come new types of experience” (Paul, 2021). In research settings, XR experiments explore unanswered questions, simulate impossible scenarios, and test new technologies. Thus participants, and to an extent researchers, cannot prospectively predict all aspects of the experience and the impact it will have on their person. Decision-making in the context of informed consent faces the challenges framed by L.A. Paul in Transformative Experience, specifically those of the epistemic (knowledge) type (Paul, 2014). An epistemic transformation is one that can only be known through experience, in contrast to a personal transformation, which changes a point of view through experience. The key feature of the transformation is that it cannot be readily reversed or forgotten. VR is highlighted as a technology that can have a transformative impact through immersive user experiences that shift perspectives (e.g., a first-person point of view of a different persona, such as another race or clinical diagnosis). Transformative experiences challenge rational choice because it is difficult to predict the outcome or how the person will feel about the outcome. In the haptics study, a participant who had never been exposed to TMS could not have known the perception of feeling a push on their hand from direct brain stimulation (no hands involved). So how could they make an informed choice to participate? Strategies to support decision-making rely on third-personal data, like watching someone else do it, or statistics on outcomes. The approach to informed consent may borrow from the framework for decision-making in transformative experiences, which centers on the logic and morality of the person.
In the Biodata case study, the interactive first-person experience of a horror scene is distinct from viewing a horror movie. Even though a movie can promote association with the characters (empathy), it remains consistent in the viewer’s mind that they are not the person in the movie. In a virtual experience, the activities are processed cognitively as though they are happening to the user, which is described as embodiment (Slater et al., 2009). This principle makes for a powerful experimental and learning tool, yet it must be tempered so that content is not traumatizing. In the Biodata study, careful calibration in a preliminary study determined the intensity of user experience that would be sufficient to generate an effect in the psychophysiological measures. A recent deliberate study of the psychophysiological effects of VR stress induction demonstrated elevated heart and respiratory rates and subjective state anxiety; the brief stressors tended to increase blood cortisol levels as well (Fauveau et al., 2024). These studies serve as examples of contextualized risks and discomforts for participants, and both conservatively involved research supervision by physicians for participant safety.
Informed consent in the face of potential transformative experiences should aim to provide information and ask the participant questions about their perspectives on analogous situations (e.g., previous reactions in a haunted house). Second, unambiguous and immediate withdrawal procedures, with defined responses to adverse effects, should be in the protocols. For example, in the stress induction study, two research staff were always present to assist with the removal of the headset and attend to the safety measures (Fauveau et al., 2024).
Standard guidance and shared responsibility
While emerging technologies like XR and AI run the inherent risk of unanticipated adverse events and future threats to confidentiality via AI applied to collected datasets, guidance for all stakeholders may still be built upon a robust conceptual framework. The key components of consenting to participate and give data to a research study can be summarized by Context, Control, and Choice (XRSI, 2020). This framework has been applied to data privacy rights in XR and AI applications, yet it may be adapted to research participant rights (e.g., California Health and Safety Code §24172). The 3 C’s are provided by stakeholders in a distributed manner. The researcher has the responsibility to know their technology systems completely and provide the necessary information to research compliance offices with the depth, transparency, and format requested. The review committee ensures the protocol abides by human subjects protections policies (US Federal rule 45 CFR 46) and organizational policies. The informed consent provided to participants must be easy to understand, complete, and embed the 3 C’s.
Each of the studies presented in this report shows how the 3 C’s are adapted to research. The Motion Tracking study consent document illustrated Control by allowing itemized consent for types of data use, data storage, and future use. The Biodata study provided easy-to-understand descriptions of the participant experience in the format of first-person questions in the consent form (e.g., What will happen to me if I take part?), which illustrates Context. The Haptics study required a direct and unambiguous withdrawal procedure for safety to ensure participant Choice. Similar issues in informed consent are addressed in the use of neurotechnology in medical and consumer applications; in 2024, the United Nations Human Rights Council report on the topic identified risks to privacy, personal integrity, and freedom of thought (Human Rights Council, 2024). XR alone, but even more in combination with AI and neurotechnology (like the TMS used in the Haptics study), poses emergent risks to human subjects’ protections and universal human rights (Chander et al., 2023).
Guidance for review committees and researchers will support the protection of participant rights. One strategy that will help review committees is to adopt a risk-based approach to evaluating study protocols based on the type of data collected, the context of the entire data lifecycle, and the potential harms in the experimental design, analogous to industry cybersecurity risk assessments (Cybersecurity and Infrastructure Security Agency, 2018). Critically, the structure of the guidance prioritizes probing the investigator to provide key information about data management, participant safety assurances, and clear and comprehensive informed consent. The guide considers the data types in combination with each other and in the context of the experiment. For example, a study collecting biometric EEG data alongside a VR experiment about interactions with a virtual avatar would create the risk of re-identifying the participant based on their unique movements, voice, and brain activity using a multimodal machine learning classifier. A multifaceted risk like this may be resolved by storing raw data sources on separately secured drives with “role-based access privileges” and converging only derived metrics for the statistical analysis that tests the research question.
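A minimal sketch of this separation pattern follows, assuming hypothetical storage paths and summary features: raw streams remain in separately secured stores, and only coarse derived metrics are joined for analysis. In practice, the two loads would run under different access roles; the single script here only shows which data ever converge.

```python
# Sketch of the "converge only derived metrics" pattern. Paths, session
# codes, and feature choices are hypothetical placeholders.
import numpy as np
import pandas as pd

def derive_eeg_features(raw_eeg: np.ndarray) -> dict:
    """Reduce a raw EEG epoch to a coarse, non-identifying summary metric."""
    return {"eeg_mean_power": float(np.mean(raw_eeg ** 2))}

def derive_motion_features(raw_motion: np.ndarray) -> dict:
    """Reduce a raw (n, 3) trajectory to a coarse movement summary."""
    step_lengths = np.linalg.norm(np.diff(raw_motion, axis=0), axis=1)
    return {"mean_step_length": float(step_lengths.mean())}

def build_analysis_table(sessions: list[str]) -> pd.DataFrame:
    """Join only derived metrics, keyed by a study-assigned session code."""
    rows = []
    for session in sessions:
        eeg = np.load(f"/secure/eeg_store/{session}.npy")        # EEG role only
        motion = np.load(f"/secure/motion_store/{session}.npy")  # motion role only
        rows.append({"session": session,
                     **derive_eeg_features(eeg),
                     **derive_motion_features(motion)})
    return pd.DataFrame(rows)
```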
Regardless of delineated risk, context is key. Context could include biases and norms around security and privacy practices within an institution and organization, legal mandates for privacy protections, and inferences made from the research data throughout the data lifecycle. This is where studying a variety of approaches taken for conducting research in various settings and use cases will inform the development of standard guidance to establish a risk-based approach to evaluate research proposals and methodologies.
XR and AI complexities
An important anticipatory aspect of the proposed guidance involves simulating potential risks associated with datasets under current and emerging AI capabilities. This includes inferences derived from direct measurements, statistical models, machine learning, and AI applications, which are highly dependent on the type, quality, and quantity of data collected. Leveraging expertise from collaborative groups, such as NIST’s Artificial Intelligence Safety Institute Consortium (AISIC) and OWASP AI (owaspai.org), guidance on AI-associated vulnerabilities in research settings may be extracted. OWASP AI’s comprehensive framework for AI security offers critical insights into threat modeling and governance across the AI lifecycle. For example, the identification of development-time threats, such as data poisoning and intellectual property theft, can directly inform protocols for securing datasets and AI models used in immersive technologies. Additionally, their emphasis on runtime application security provides actionable strategies to safeguard operational systems. AISIC’s focus on privacy-preserving machine learning and the creation of testing environments complements OWASP’s guidance by addressing broader governance and technical controls.
In the context of motion-tracking data that can uniquely identify research participants, the threat to confidentiality only appears when the data are combined with deep learning models (Miller et al., 2020; Nair et al., 2023b). Motion data inherently contain unique behavioral patterns that, when analyzed, can be linked back to individual users. Despite anonymization efforts, machine learning models have demonstrated a high probability of re-identifying users based on these motion patterns. This risk is amplified by the potential for inference, where user behavior could reveal sensitive details about cognitive states or physical characteristics. These inferences, combined with metadata about the participant (e.g., sex, location, diagnosis), may expose the actual identity of a participant who expects anonymity. In the absence of AI, the data are benign. This suggests that all motion-tracking data should be protected at the level of sensitive data, considering the future life cycle of potential analyses and the risk of data breaches and misuse.
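To make the threat model concrete, the sketch below shows how even a simple classifier over windowed motion features could attempt re-identification. It is an illustration under assumed data shapes, not the method of the cited studies.

```python
# Illustrative motion re-identification threat model, assuming scikit-learn.
# X_windows: (n_windows, window_len, 3) controller positions per time window;
# user_ids: (n_windows,) user labels. Shapes and features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def featurize(windows: np.ndarray) -> np.ndarray:
    """Per-window summaries: mean and std position, mean speed, per axis."""
    speed = np.diff(windows, axis=1)
    return np.concatenate([windows.mean(axis=1),
                           windows.std(axis=1),
                           np.abs(speed).mean(axis=1)], axis=1)

def reidentification_accuracy(X_windows: np.ndarray,
                              user_ids: np.ndarray) -> float:
    X = featurize(X_windows)
    X_tr, X_te, y_tr, y_te = train_test_split(X, user_ids, stratify=user_ids,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)  # fraction of windows correctly attributed
```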
Limitations
This study has several limitations due to the nature of the data collection as an informal study. Anonymous surveys were used in the workshops, so there is no documentation of the characteristics of the cohort beyond knowing they were IRB members completing the workshop for continuing education or personal interest. Conference attendees necessarily come from institutions that are sufficiently resourced to send research compliance staff to in-person training with additional costs; thus we are not sampling from smaller organizations that are not research intensive. Attendee selection did not require previous experience with the technologies discussed, and no active recruitment was done. A consequence was a cohort with a wide range of preparedness for the topic. The surveys and analysis may have yielded different results if conducted with research compliance groups concentrated in computer science and engineering versus biomedical research. Generalization of the attitudes and knowledge expressed in this cohort should be limited to generalist IRBs from research-focused organizations.
Another limitation of the study was that the concentrated format did not give participants the capacity to conduct a comprehensive risk assessment. Only select information was provided for review, and only risks associated with that information were considered in the questions. Given more time or resources, participants would likely have reported a more comprehensive and accurate risk assessment. The findings here should be interpreted as an initial review that would typically be followed by a request for more information in the IRB review process. The study protocols were not provided in full for these case studies; the authors relied on lab procedures and protocols, informed consent documents, manuscripts, and interviews with lead researchers. In future case studies, complete IRB submission records would be desirable to better simulate the experience of risk assessment in human subjects protections.
Future directions
The authors recommend the following steps be taken to improve human subjects’ protections in XR and AI research. First, technology educational resources should be made available to IRB officers that are conceptual and brief to introduce relevant technical knowledge. Industry and professional associations often develop these resources (e.g., IEEE). Second, a guidance document for reviewing XR protocols, framed as a checklist with inquiries to the submitting investigator, will reduce the burden on IRB committees and expedite the process. The draft tested here has been further developed and published as a free online resource (see Data availability statement). Third, a domain-agnostic research code of conduct handbook for XR should be provided to all labs using immersive technologies. Achieving this would require a consortium of researchers to deliberate and put forth guidelines, as has been done for VR clinical trials (Birckhead et al., 2019). Collectively, comprehensive training in awareness of overt and covert risks, along with best practices for mitigation, will promote participant safety and secure data protections over the full life cycle of studies.
As an extension of this work, our team is developing a research code of conduct in collaboration with the Markkula Center. Future implementation steps include (1) drafting domain-specific guidelines for immersive research; (2) piloting the code of conduct with partner institutions; and (3) iterating based on feedback from IRB professionals and XR developers. These steps will help operationalize the 4-C principles and align them with both research and industry workflows.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.scu.edu/ethics/focus-areas/bioethics/resources/immersive-technologies-and-human-subjects-protections/.
Ethics statement
The studies involving humans were approved by Santa Clara University Institutional Review Board. The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent from the participants was not required to participate in this study in accordance with the national legislation and the institutional requirements.
Author contributions
JS: Conceptualization, Writing – review and editing, Supervision, Investigation, Software, Methodology, Resources, Project administration, Visualization, Writing – original draft, Validation, Formal Analysis. AB: Methodology, Writing – review and editing, Formal Analysis, Software, Data curation. BC: Writing – original draft, Project administration, Resources.
Funding
The author(s) declared that financial support was received for this work and/or its publication. The work was funded by The Markkula Center for Applied Ethics' Hackworth Faculty Award and PRIM&R honorariums for Julia Scott and Banujeet Choudhary.
Acknowledgements
We thank the participants in the workshop for their interest in this topic and feedback on the case studies and guidelines. We are grateful to the labs that shared their research studies for the development of the case studies. We appreciate the hosting of the workshop resources for broader reach by the Markkula Center for Applied Ethics, Santa Clara University. The completion of the final manuscript was supported by Kavya Pearlman, CEO XRSI.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The author JS declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frvir.2026.1674326/full#supplementary-material
References
Agrawal, V., Hazratifard, M., Elmiligi, H., and Gebali, F. (2023). Electrocardiogram (ECG)-based user authentication using deep learning algorithms. Diagnostics 13 (3), 439. doi:10.3390/diagnostics13030439
Aucouturier, E., and Grinbaum, A. (2023). Recommendations to address ethical challenges from research in new technologies. Report No.: D2.2. European Union: iRECS. Available online at: https://irp.cdn-website.com/5f961f00/files/uploaded/Deliverable_2.2.pdf.
Birckhead, B., Khalil, C., Liu, X., Conovitz, S., Rizzo, A., Danovitch, I., et al. (2019). Recommendations for methodology of virtual reality clinical trials in health care by an international working group: iterative study. JMIR Ment. Health 6 (1), e11973. doi:10.2196/11973
Chamola, V., Bansal, G., Das, K., Hassija, V., Siva, N., Reddy, S., et al. (2023). Beyond reality: the pivotal role of generative AI in the metaverse.
Chander, D., Scott, J., Pearlman, K., Cartagena, O., and Choudhary, B. (2023). “Human rights and neurotechnology impact, opportunities and measures,” in XRSI.
Cybersecurity and Infrastructure Security Agency (2018). Framework for improving critical infrastructure cybersecurity. National Institute of Standards and Technology, Version 1.1, NIST CSWP 04162018. doi:10.6028/NIST.CSWP.04162018
Daniel-Watanabe, L., Cook, B., Leung, G., Krstulović, M., Finnemann, J., Woolley, T., et al. (2025). Using a virtual reality game to train biofeedback-based regulation under stress conditions. Psychophysiology 62 (1), e14705. doi:10.1111/psyp.14705
Diemer, J., Alpers, G. W., Peperkorn, H. M., Shiban, Y., and Mühlberger, A. (2015). The impact of perception and presence on emotional reactions: a review of research in virtual reality. Front. Psychol. 6, 26. doi:10.3389/fpsyg.2015.00026
Digital Regulation and Cooperation Forum (2022). The metaverse and immersive technologies—A regulatory perspective. Available online at: https://www.drcf.org.uk/publications/blogs/the-metaverse-and-immersive-technologies-a-regulatory-perspective.
European Metaverse Research Network (2023). European metaverse research network. Available online at: https://metaversechair.ua.es/research-network/.
Fauveau, V., Filimonov, A. K., Pyzik, R., Murrough, J., Keefer, L., Liran, O., et al. (2024). Comprehensive assessment of physiological and psychological responses to virtual reality experiences. J. Med. Ext. Real. 1 (1), 227–241. doi:10.1089/jmxr.2024.0020
Hamad, A., and Jia, B. (2022). How virtual reality technology has changed our lives: an overview of the current and potential applications and limitations. Int. J. Environ. Res. Public Health 19 (18), 11278. doi:10.3390/ijerph191811278
Human Rights Council (2024). Impact, opportunities and challenges of neurotechnology with regard to the promotion and protection of all human rights [Report of the Human Rights Council Advisory Committee]. United Nations.
Internal Market and Consumer Protection (2024). Virtual worlds: opportunities, risks and policy implications for the single market. Available online at: https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2022/2198(INI)&l=en.
McGill, M. (2021). The IEEE global initiative on ethics of extended reality (XR) report: extended reality (XR) and the erosion of anonymity and privacy. White paper, 1–24.
Meta (2022). Supporting independent metaverse research across Europe. Facebook. Available online at: https://about.fb.com/news/2022/12/supporting-independent-metaverse-research-across-europe/.
Miller, M. R., Herrera, F., Jun, H., Landay, J. A., and Bailenson, J. N. (2020). Personal identifiability of user tracking data during observation of 360-degree VR video. Sci. Rep. 10 (1), 17404. doi:10.1038/s41598-020-74486-y
Nair, V., Guo, W., Mattern, J., Wang, R., O’Brien, J. F., Rosenberg, L., et al. (2023a). Unique identification of 50,000+ virtual reality users from head and hand motion data. Available online at: https://www.usenix.org/conference/usenixsecurity23/presentation/nair-identification.
Nair, V., Rosenberg, L., O’Brien, J. F., and Song, D. (2023b). Truth in motion: the unprecedented risks and opportunities of extended reality motion data. IEEE Secur. Priv., 2–10. doi:10.1109/MSEC.2023.3330392
Paul, L. A. (2014). Transformative experience. Oxford University Press. Available online at: https://books.google.com/books?id=zIXjBAAAQBAJ.
Paul, L. A. (2021). “Decision making in times of transformative change,” in The international symposium social singularity in the 21st century: at the crossroads of history, Prague, the Czech Republic.
Ramirez, E. J., and LaBarge, S. (2020). Ethical issues with simulating the bridge problem in VR. Sci. Eng. Ethics 26 (6), 3313–3331. doi:10.1007/s11948-020-00267-5
Lv, Z. (2023). Generative artificial intelligence in the metaverse era. Cogn. Robot. 3, 208. doi:10.1016/j.cogr.2023.06.001
Rossi, S., Antal, A., Bestmann, S., Bikson, M., Brewer, C., Brockmöller, J., et al. (2021). Safety and recommendations for TMS use in healthy subjects and patient populations, with updates on training, ethical and regulatory issues: expert guidelines. Clin. Neurophysiol. 132 (1), 269–306. doi:10.1016/j.clinph.2020.10.003
Rubo, M., Messerli, N., and Munsch, S. (2021). The human source memory system struggles to distinguish virtual reality and reality. Comput. Hum. Behav. Rep. 4, 100111. doi:10.1016/j.chbr.2021.100111
Rueda, J. (2022). Hit by the virtual trolley: when is experimental ethics unethical? Teorema Int. J. Philosophy 41 (1), 7–27. Available online at: www.jstor.org/stable/27118201.
Slater, M., Lotto, B., Arnold, M. M., and Sanchez-Vives, M. V. (2009). How we experience immersive virtual environments: the concept of presence and its measurement. Anu. Psicol. 40 (2), 193–210.
Sony (2024). Sony research award program. Available online at: www.sony.com/en/SonyInfo/research-award-program/.
Tanaka, Y., Serfaty, J., and Lopes, P. (2024). “Haptic source-effector: Full-body haptics via non-invasive brain stimulation,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–15. doi:10.1145/3613904.3642483
Wang, M., Hu, J., and Abbass, H. A. (2020). BrainPrint: EEG biometric identification based on analyzing brain connectivity graphs. Pattern Recognit. 105, 107381. doi:10.1016/j.patcog.2020.107381
Williams, M., and Moser, T. (2019). The art of coding and thematic exploration in qualitative research. Int. Manag. Rev. 15, 45. Available online at: https://api.semanticscholar.org/CorpusID:198662452.
Wu, S., Ramdas, A., and Wehbe, L. (2022). Brainprints: identifying individuals from magnetoencephalograms. Commun. Biol. 5 (1), 852. doi:10.1038/s42003-022-03727-9
XR in Research: A Case Study (2022). The embassy of good science. Available online at: https://embassy.science/wiki/Instruction:60bd0d8d-032c-4931-9b64-b987b64d66bb.
Keywords: artificial intelligence, biosensors, case study, human subject protections, immersive technologies, privacy, research ethics, virtual reality
Citation: Scott JA, Bagade A and Choudhary B (2026) Immersive technologies and AI generate novel challenges to human subjects’ protections protocols. Front. Virtual Real. 7:1674326. doi: 10.3389/frvir.2026.1674326
Received: 27 July 2025; Accepted: 12 January 2026;
Published: 30 January 2026.
Edited by:
Jonathan Giron, Reichman University, Israel
Reviewed by:
Timothy John Pattiasina, Institut Informatika Indonesia, Indonesia
Becky Spittle, Birmingham City University, United Kingdom
Copyright © 2026 Scott, Bagade and Choudhary. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Julia A. Scott, jscott1@scu.edu