
SYSTEMATIC REVIEW article

Front. Psychol., 29 August 2023
Sec. Neuropsychology
This article is part of the Research Topic Virtual, Mixed and Augmented Reality in Cognitive Neuroscience and Neuropsychology - Volume II

Ecologically valid virtual reality-based technologies for assessment and rehabilitation of acquired brain injury: a systematic review

  • 1Faculdade de Artes e Humanidades, Universidade da Madeira, Funchal, Portugal
  • 2NOVA Laboratory for Computer Science and Informatics, Lisbon, Portugal
  • 3Agência Regional para o Desenvolvimento da Investigação, Tecnologia e Inovação, Funchal, Portugal
  • 4Neurorehabilitation and Brain Research Group, Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, Spain
  • 5NEURORHB, Servicio de Neurorrehabilitación de Hospitales Vithas, Valencia, Spain
  • 6Faculdade de Ciências Exatas e da Engenharia, Universidade da Madeira, Funchal, Portugal

Purpose: A systematic review was conducted to examine the state of the literature regarding the use of ecologically valid virtual environments and related technologies to assess and rehabilitate people with Acquired Brain Injury (ABI).

Materials and methods: A literature search was performed following the PRISMA guidelines using the PubMed, Web of Science, ACM and IEEE databases. The focus was on assessment and intervention studies using ecologically valid virtual environments (VE). Studies were included if they involved individuals with ABI and simulated environments of the real world or Activities of Daily Living (ADL).

Results: Seventy out of 363 studies were included in this review and were grouped and analyzed according to the nature of their simulation, comprising a total of 12 kitchens, 11 supermarkets, 10 shopping malls, 16 streets, 11 cities, and 10 other everyday life scenarios. These VE were mostly presented on computer screens, HMDs and laptops, and patients interacted with them primarily via mouse, keyboard, and joystick. Twenty-five out of 70 studies had a non-experimental design.

Conclusion: Evidence about the clinical impact of ecologically valid VE is still modest, and further research with larger samples is needed. It is important to standardize neuropsychological and motor outcome measures to allow stronger conclusions across studies.

Systematic review registration: PROSPERO, identifier CRD42022301560, https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=301560.

1. Introduction

Given the high prevalence of cognitive impairment, functional dependence and social isolation after acquired brain injury (ABI), namely traumatic brain injury (TBI) (Robba and Citerio, 2023) and stroke (Tsao et al., 2023), finding effective motor and cognitive assessment and rehabilitation solutions has been a primary goal of much research in the field of health technologies (Bernhardt et al., 2019). Many daily activities, such as grocery shopping, require traveling to outdoor locations such as supermarkets or shopping malls. Street crossing and driving are demanding tasks that require multiple, complex cognitive skills that are commonly impaired after ABI (Parsons, 2015). Although the goal of rehabilitation is to improve individuals’ independence in these activities, practicing them in real environments can be dangerous because of intrinsic hazards such as traffic or pedestrians, and is extremely resource-intensive in terms of staff management and financial costs, which are scarce in most clinics (Bohil et al., 2011). These limitations have motivated the use of Virtual Reality (VR) to safely recreate different scenarios in the clinic (Rizzo et al., 2004).

Most ABI rehabilitation approaches rely on theoretically valid principles; however, the exercises and activities, such as physiotherapy and occupational therapy, repetitive motor exercises, and paper-and-pencil cognitive exercises with static stimuli, are demotivating and lack ecological validity (Parsons, 2016). The issue of ecological validity was already being discussed in 1982, when Neisser argued that cognitive psychology experiments were conducted in artificial settings and employed measures with no counterparts in everyday life (Neisser, 1982). In opposition, Banaji and Crowder (1989) argued that ecological approaches lack the internal validity and experimental control needed for scientific progress (Banaji and Crowder, 1989). In 1996, Franzen and Wilhelm conceptualized ecological validity as having two aspects: veridicality, in which performance on a construct-driven measure should predict some feature(s) of the person’s everyday life functioning, and verisimilitude, in which the requirements of a neuropsychological measure and the testing conditions should resemble those found in a person’s ADLs (Franzen and Wilhelm, 1996). Since then, the search for a balance between everyday activities and laboratory control has had a long history in clinical neuroscience (Parsons, 2015), and researchers have used different definitions and interpretations of the term ecological validity (Holleman et al., 2020).

A promising approach to improving the ecological validity of neuropsychological assessment and rehabilitation is the use of immersive and non-immersive VR systems (Rizzo et al., 2004; Parsons, 2015; Kourtesis and MacPherson, 2023). VR combines the control and rigor of laboratory measures with a simulation that depicts real-life situations, balancing naturalistic observation with the need to control key variables (Bohil et al., 2011). Over recent years, VR-based methodologies have been developed to assess and improve cognitive (Aminov et al., 2018; Luca et al., 2018; Maggio et al., 2019) and motor (Laver et al., 2015, 2017) functions, via immersive (e.g., Head-Mounted Displays (HMDs), Cave Automatic Virtual Environments (CAVEs)) and non-immersive (2D computer screens, tablets, mobile phones) technologies. Non-immersive VR requires the use of controllers, joysticks and keyboards, which can be challenging for individuals with no gaming or PC experience, namely older adults and clinical populations (Werner and Korczyn, 2012; Zygouris and Tsolaki, 2015; Parsons et al., 2018; Laver et al., 2017; Zaidi et al., 2018). On the other hand, immersive VR using naturalistic interactions seems to facilitate comparable performance between gamers and non-gamers (Zaidi et al., 2018).

Some critical elements, such as presence, time perception, plausibility and embodiment, collectively contribute to the ecological validity of VR-based assessment and rehabilitation programs. Presence comprises two illusions, usually referred to as Place Illusion (the illusion of being in the place depicted by the VE) and Plausibility (the illusion that the virtual events are really happening). Embodiment refers to the feeling of “owning” an avatar or virtual body. This aspect is particularly significant for patients with motor impairments: an embodied experience enhances motor learning and fosters a stronger mind–body connection during rehabilitation sessions (Vourvopoulos et al., 2022). Presence and embodiment are the key illusions of VR (Slater et al., 2022). Another important consideration is that time perception in VR differs from the physical world, leading to potential alterations in the patient’s temporal experience. Understanding how time is perceived in VR is crucial for designing effective assessment and rehabilitation protocols and managing patient expectations (Mullen and Davidenko, 2021). The integration of these key elements potentially enhances patient motivation and engagement, which might result in better adherence and improved outcomes.

According to Slater (1999), if a VR system allows the individual to turn their head in any direction and still receive visual information only from within the VE, then it is a more immersive system than one where the individual can only see the VE’s visual signals along one fixed direction (Slater, 1999). Accordingly, while in a non-immersive VR system the VE is displayed on a computer monitor and the interaction is limited to a mouse, joystick or keyboard, in an immersive VR system (typically HMDs and CAVEs) the user is surrounded by a 3D representation of the real world and can use their own body for naturalistic interaction. This strong feeling of ‘being physically present’ in the VE allows one to respond in a realistic way to the virtual stimuli (Agrewal et al., 2020), eliciting the activation of brain mechanisms that underlie sensorimotor integration and of the cerebral networks that regulate attention (Vecchiato et al., 2015). Notwithstanding the technical and theoretical differences between immersive and non-immersive VR, both have pros and cons concerning the use of novel assessment and rehabilitation systems to improve and personalize treatments according to the patient’s needs.

One of the explanations for the growing interest in VR is its potential to incorporate motor and cognitive tasks within simulations of ADLs, providing safe and controlled environments in which patients can rehabilitate these activities. Among their limitations, VR technologies may cause cybersickness symptoms such as nausea, dizziness, disorientation, and postural instability. However, recent reviews and meta-analyses suggest that these symptoms arise mainly from inappropriate implementation of immersive VR hardware and software (Kourtesis et al., 2019; Saredakis et al., 2020). Another problem is that researchers and clinicians often do not quantitatively assess cybersickness, even though it can affect cognitive performance (Nesbitt et al., 2017; Arafat et al., 2018; Mittelstaedt et al., 2019). Since ABI patients may be more susceptible to these symptoms (Spreij et al., 2020b), extra caution is needed in the development of VR-based assessment and intervention tools. Collaboration between clinicians, researchers, and technology developers is essential to produce VR-based tools that can address the assessment and treatment needs of ABI patients (Zanier et al., 2018).

In recent years, a number of reviews have analyzed the use of ecologically valid environments in cognitive and motor assessment and/or rehabilitation of multiple sclerosis (Weber et al., 2019), addictive disorders (Segawa et al., 2020), hearing problems (Keidser et al., 2020), mental health (Bell et al., 2022), language (Peeters, 2019) and neglect (Azouvi, 2017). Some reviews have also provided overviews closer to the present topic. Romero-Ayuso et al. (2021) conducted a review to determine the available ecologically valid tests for the assessment of executive functions that predict individuals’ functioning (Romero-Ayuso et al., 2021). The authors analyzed 76 studies and identified 110 tools to assess instrumental activities of daily living, namely menu preparation and shopping. Since they focused on the assessment of executive functions, they found a predominance of tests based on the Multiple Errands Test paradigm (Romero-Ayuso et al., 2021). Corti et al. (2022) performed a review of VR assessment tools for ABI described in scientific publications between 2010 and 2019. Through the analysis of 38 studies, they identified 16 tools that assessed executive functions and prospective memory and another 15 that assessed visuo-spatial abilities. The authors found that about half of these tools delivered tasks that differ from everyday life activities, limiting the generalization of the assessment to real-world performance. Although the authors recognize the great potential of VR for ABI assessment, they recommend improving existing tools or developing new ones with greater ecological validity (Corti et al., 2022).

To the best of our knowledge, no review has analyzed the characteristics and clinical validation of ecologically valid daily life scenarios developed to assess and/or rehabilitate both the cognitive and the motor domains in acquired brain injury patients. As such, this review aims to examine the state of the literature regarding the use of ecologically valid virtual environments to assess and rehabilitate people with ABI. This review focuses on (1) the most common virtual environments used in acquired brain injury assessment and rehabilitation, (2) the technologies used for presentation and interaction in these environments, and (3) how these virtual environments are being clinically validated regarding their impact on ABI assessment and rehabilitation.

2. Methods

A systematic search of the existing literature was performed following the PRISMA guidelines using four digital databases: PubMed, Web of Science, IEEE and ACM. The search focused on assessment and intervention studies published in English, from 2000 to 2021, in peer-reviewed journals and conferences. The search targeted titles and abstracts using the following keywords and Boolean logic: ‘virtual reality’ OR ‘virtual environment’ OR ‘immersive’ AND ‘stroke’ OR ‘traumatic brain injury’ OR ‘acquired brain injury’ AND ‘rehabilitation’ OR ‘assessment’ AND ‘simulated environments’ OR ‘activities daily living’ NOT ‘motor’ OR ‘mobility’ OR ‘limb’ OR ‘balance’ OR ‘gait’.
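For illustration only, the PubMed arm of such a search could be scripted as in the minimal sketch below, using Biopython’s Entrez utilities. The field tags, e-mail placeholder, and retmax value are assumptions, and only part of the Boolean string above is reproduced; the review’s actual searches were run through each database’s own interface.

```python
# Minimal sketch (assumption, not the review's actual procedure): querying the
# PubMed portion of the search via Biopython's Entrez E-utilities.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # required by NCBI; placeholder address

query = (
    "(virtual reality[Title/Abstract] OR virtual environment[Title/Abstract] "
    "OR immersive[Title/Abstract]) "
    "AND (stroke[Title/Abstract] OR traumatic brain injury[Title/Abstract] "
    "OR acquired brain injury[Title/Abstract]) "
    "AND (rehabilitation[Title/Abstract] OR assessment[Title/Abstract])"
)

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2000", maxdate="2021", retmax=1000)
record = Entrez.read(handle)
print(record["Count"], "records found;", len(record["IdList"]), "PMIDs retrieved")
```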

All types of articles (excluding reviews and editorials) were included if: (1) they involved individuals of any age with stroke or TBI; (2) they simulated environments of the real world or ADL; and (3) they reported at least one outcome measure related to the clinical effects of the intervention. All simulated environments were considered, including 360° videos and serious games. We included non-immersive (e.g., computer screen and tablet), semi-immersive (e.g., wall projections and driving simulators), and fully immersive systems (e.g., HMD). Additionally, there were no limitations regarding the administration, frequency, duration, or intensity of the assessment or intervention sessions.

Articles were excluded if they did not provide outcome data from objective clinical measures (such as cognitive and motor assessment instruments, questionnaires, or interviews), were not peer-reviewed, or were systematic reviews or meta-analyses. In addition, articles known by the authors to fulfill the search criteria but not retrievable through the search above were also added to the review. These articles were all published in the International Conference on Disability, Virtual Reality & Associated Technologies, whose proceedings are not indexed.

Data from the included articles were extracted by two of the authors (ALF and JL). Inclusion of the articles was discussed and reviewed in two meetings (one in the screening and one in the eligibility phase) with the remaining senior authors. The general characteristics and results of the studies were extracted, namely the authors’ names and year of publication, study design, type of participants, targeted domains, type of interaction and display, type of assessment/rehabilitation VR task, outcome measures, and main conclusions. These general characteristics are displayed in Tables 1–6.


Table 1. Virtual reality-based technologies in a kitchen environment for assessment and rehabilitation of acquired brain injury.


Table 2. Virtual reality-based technologies in a supermarket environment for assessment and rehabilitation of acquired brain injury.


Table 3. Virtual reality-based technologies in a shopping mall environment for assessment and rehabilitation.


Table 4. Virtual reality-based technologies in a street environment for assessment and rehabilitation.


Table 5. Virtual reality-based technologies in a city environment for assessment and rehabilitation of acquired brain injury.


Table 6. Virtual reality-based technologies in other everyday life environments for assessment and rehabilitation of acquired brain injury.

3. Results

The results of the different phases of the systematic review are depicted in the PRISMA flow diagram (Figure 1). A total of 551 papers were identified through database searching, of which 363 remained after duplicate removal. In the first screening based on titles and abstracts, 195 were removed, mainly due to the type of study (review articles, theoretical articles, studies describing tools with no clinical validation). In total, 168 full-text articles were assessed for eligibility, of which 98 were excluded because they did not involve VR, did not include ABI participants, did not describe rehabilitation or assessment studies, or did not include simulated environments or ADL. Accordingly, 70 articles were included for analysis in this systematic review. Of the 70 studies, 45 had an experimental design and 25 a non-experimental design. Due to the small number of participants per trial and the heterogeneity of outcome measures (136 different cognitive, functional, motor, emotional, cybersickness, immersion, and engagement assessment instruments and questionnaires), it was not possible to perform a meta-analysis.


Figure 1. PRISMA flow chart.

3.1. Kitchens

Simulation of kitchens is attractive for rehabilitation as it involves manipulation of objects, planning, execution of ADLs, and the use of skills that are commonly affected after ABI (Poncet et al., 2017) (Table 1). Twelve studies using kitchens were identified (six prospective cross-sectional studies, four case studies and two exploratory studies), which enrolled 132 stroke participants, 107 TBI participants, one participant with meningoencephalitis, and 54 healthy controls (HC) in activities performed in simulated virtual kitchens. Virtual kitchens were used to assess or rehabilitate executive functions (n = 4), attention (n = 4), upper limb function (n = 5), and engagement (n = 1), while performing different tasks such as preparing meals or hot drinks.

Most of these studies have focused on cognitive aspects involved in the performance of ADLs. For example, Zhang et al. (2001) used a kitchen environment with 30 participants with TBI to evaluate their ability to process and sequence information in comparison to healthy controls (HC). Individuals with TBI showed a decreased ability to process information, identify logical sequencing, and complete the overall task in comparison to HC (Zhang et al., 2001). A more complex meal preparation task that consisted of 81 steps was used in a later study by the same group. The scoring method and experimental setup were replicated but, in this case, a TV screen provided the visual feedback. The authors compared the performance of 54 participants with TBI in the virtual world and in a real physical kitchen and repeated the same procedure 3 weeks later. The correlation between scores in the virtual and in the real kitchen was moderate for both the first and second trial (Zhang et al., 2003).

Hilton et al. (2002) developed a tangible user interface (TUI) for a hot drink preparation task and analyzed the time to task completion and the errors of seven stroke participants. The location of objects, the instructions given, the physical constraints, ineffective user responses, and the visual and auditory feedback were identified as important aspects when recreating kitchens in VR. Moreover, feedback was provided for the redesign of the TUI (Hilton et al., 2002). Subsequently, Edmans et al. (2006) compared the performance of 50 individuals with stroke at a hot drink preparation task, consisting of 12–27 steps, in a virtual and in a physical kitchen. The VE was displayed on, and interacted with via, a touch screen, and performance in both settings was assessed as the number of steps completed. Weak correlations were found between the scores in the two conditions (r = 0.30, p < 0.005). Later on, these authors evaluated the effectiveness of this VE, presented on a touch-sensitive screen, in a sample of 13 single case studies with stroke patients. Visual inspection of scores across all cases showed a trend toward improvement over time in both real and virtual hot drink making ability in both the control and intervention phases. There was no significant difference in the improvements in real and virtual hot drink making ability during the control and intervention phases in the 13 cases. The authors concluded that further development and studies of their system were required (Edmans et al., 2009). The hot drink preparation task in a virtual kitchen was also used by Besnard et al. (2016). The performance of 19 HC and 19 participants with TBI was compared in both the virtual and the physical kitchen. Significant moderate correlations were found between the number of total errors in the two environments for HC (r = 0.65, p = 0.001) and TBI (r = 0.44, p = 0.02). Measures of four virtual tasks significantly correlated with both executive and intelligence measures. Intelligence quotient correlated significantly with time to completion (r = 0.48, p = 0.03), total errors (r = 0.48, p = 0.03), and commission errors (r = 0.48, p = 0.03). The profile scores of the Zoo Map Test (Wilson et al., 1996) correlated significantly with total errors (r = 0.54, p = 0.01) and commission errors (r = 0.47, p = 0.03). In addition, a significant correlation was observed between the Modified Six Elements Test (MSET) profile score (Wilson et al., 1996) and the total number of errors (r = 0.60, p = 0.005) and omissions (r = 0.48, p = 0.03) (Besnard et al., 2016). Cao et al. (2010) also compared the hot drink preparation task in a physical kitchen and in a virtual replica, which was displayed on a computer screen and interacted with via a keyboard and a mouse. A study involving this VE, 13 HC, and seven individuals post-stroke showed that participants with stroke needed nearly twice the time of HC, and made more errors in the VE than in the physical kitchen (Cao et al., 2010).
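Across these kitchen studies, convergent validity is typically reported as a Pearson correlation between scores obtained in the virtual task and in its real-world counterpart. The following minimal sketch uses hypothetical error counts (not data from the cited studies) to show how such a coefficient can be computed:

```python
# Minimal sketch: convergent validity as a Pearson correlation between error
# counts in a virtual kitchen task and in the corresponding physical kitchen.
# The arrays below are hypothetical and only illustrate the computation.
from scipy.stats import pearsonr

virtual_errors = [3, 5, 2, 8, 6, 4, 7, 1, 9, 5]   # per-participant errors in the VE
real_errors = [2, 6, 3, 7, 5, 4, 8, 2, 10, 4]     # errors in the physical kitchen

r, p = pearsonr(virtual_errors, real_errors)
print(f"r = {r:.2f}, p = {p:.3f}")  # a moderate-to-strong r supports verisimilitude
```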

Virtual kitchens have also been used to assess and improve motor performance. Adams et al. (2015) simulated a meal preparation task consisting of 17 steps. The VE was shown on a TV screen and interacted with through arm movements detected with the Xbox Kinect sensor. The time that 14 participants with stroke took to complete the virtual task was moderately correlated (r = 0.56, p = 0.036) with their time score on the Wolf Motor Function Test (WMFT) (Wolf et al., 2001). Huang et al. (2017) used the Amadeo (Tyromotion GmbH, Graz, Austria), a hand rehabilitation robotic device, together with an HMD, for upper-limb rehabilitation. Eight stroke participants were enrolled in a prospective cohort study comprising 18 30-min training sessions over 6 weeks. Each session consisted of passive (10 min), assist-as-needed (10 min), and two-dimensional and three-dimensional VR task-oriented exercises (5 min each) displayed in the HMD. Among the VR-based exercises, participants were required to complete certain ADLs in a simulated kitchen (e.g., opening the oven, setting the alarm clock) with the paretic hand. All participants were assessed prior to and at the end of the intervention with the Fugl-Meyer Assessment (FMA) (Fugl-Meyer et al., 1975), the Motor Assessment Scale (MAS) (Malouin et al., 1994), and, in addition, with the active range of motion and finger force intensity collected by the robotic device. Results showed an improvement in the motor skills of all eight subjects, evidenced by an increase of 2.32 (+37.5%) in the FMA, 1.16 (+38.8%) in the MAS, and increases of 3.36 (+42.8%) and 11.21 (+33.3%) in the average mean force during extension and flexion, respectively. All participants but one also showed noticeable improvement in range of motion (Huang et al., 2017).

Triandafilou et al. (2018) developed the Virtual Environment for Rehabilitative Gaming Exercises (VERGE) system for home-based motor therapy. Within this 3D multi-user VE, stroke patients can interact with therapists and/or other stroke patients. Each user’s own movement controls an avatar through kinematic measurements made with a Kinect device. The system (laptop, Xbox Kinect sensor, and a wireless optical pen mouse) was designed to train movements important for rehabilitation and to provide real-time feedback on performance. The VERGE system includes three tasks: (1) Ball Bump, where the goal is to hit a ball back and forth across the table while avoiding objects on the table; (2) Food Fight, where the user “grasps” an object by placing the avatar’s hand in close proximity and clicking a button on a wireless optical pen mouse with either hand; and (3) Trajectory Trace, where the participant draws a 3D trajectory in the air. This trajectory is then passed to another participant, who attempts to erase it by retracing it. The state of the game (Draw, Claim, Trace, or Reset), as well as the initiation and termination of drawing the curve, is controlled by touching a button (located on the avatar’s chest) with the less affected hand. Fifteen stroke patients with upper extremity hemiparesis participated in a pilot study consisting of a three-week intervention. Each week, participants performed three 1-h training sessions with one of three modalities: (1) the VERGE system, (2) an existing VE based on Alice in Wonderland (AWVR), or (3) a home exercise program (HEP). Participant kinematics were captured with the Xsens 3D motion tracker system (MVN, Xsens, Culver City). Arm displacement averaged 350 m for each VERGE training session and was not significantly less when using VERGE than when using AWVR or HEP. Participants were split on their preference for VERGE, AWVR or HEP. The VERGE system was found to be an effective means of promoting repetitive practice of arm movement by 85% of the participants, and almost all of them indicated a willingness to perform the training for at least 2–3 days per week at home (Triandafilou et al., 2018). Finally, Thielbar et al. (2020) evaluated VERGE with stroke patients in longitudinal studies in a laboratory environment and in participants’ homes. Active arm movement was tracked throughout therapy sessions in both studies. Patients achieved considerable arm movement while using the system: mean voluntary hand displacement was greater than 350 m per therapy session with the VERGE system. Compliance with home-based therapy was high, with 94% of all scheduled sessions completed. Having multiple players led to longer sessions and more arm movement than when patients trained by themselves, corroborating the benefits of this multi-user VR system (Thielbar et al., 2020).
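Neither paper details how per-session displacement is derived from the tracked joint streams; a plausible reading is the summed Euclidean distance between consecutive hand-position samples, as in the minimal sketch below (the trajectory and the 30 Hz rate are simulated assumptions, not study data):

```python
# Minimal sketch: cumulative hand displacement over a session, computed as the
# summed Euclidean distance between consecutive tracked hand positions.
# The sampling rate and trajectory are simulated; the cited studies do not
# publish their exact computation.
import numpy as np

def cumulative_displacement(positions: np.ndarray) -> float:
    """positions: (n_samples, 3) array of hand coordinates in meters."""
    steps = np.diff(positions, axis=0)              # displacement between samples
    return float(np.linalg.norm(steps, axis=1).sum())

# Example: one hour of assumed 30 Hz tracking of a repetitive circular reach.
t = np.linspace(0, 3600, 30 * 3600)
positions = np.stack(
    [0.25 * np.cos(0.5 * t), 0.25 * np.sin(0.5 * t), np.zeros_like(t)], axis=1
)
print(f"Session displacement: {cumulative_displacement(positions):.0f} m")
```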

It is important to highlight that familiarity with the environment may have an impact on the level of engagement. O’Brien (2007) replicated the kitchens of three individuals with stroke in VR. The VE was shown using an HMD, and navigation was facilitated by participants indicating the desired direction to the investigator through basic hand gestures made with their affected limb. The level of engagement of the participants while navigating the virtual version of their own kitchen or an unfamiliar one was assessed using a semi-structured interview, questionnaires, and variations in skin conductance. Higher levels of engagement were associated with greater familiarity with the environment (O’Brien, 2007).

3.2. Supermarkets

Supermarkets have also been repeatedly simulated in VR, as shopping involves planning and execution skills, which are also likely to be affected after an injury to the brain (Josman et al., 2006; Kang et al., 2008; Raspelli et al., 2012; Yip and Man, 2013; Sorita et al., 2014; Yin et al., 2014; Cogné et al., 2018; Ogourtsova et al., 2018; Demers and Levin, 2020; Spreij et al., 2020b; Table 2). Shopping is a highly demanding activity strongly associated with independence, which is difficult to assess in real-life environments and difficult to replicate in the clinical setting (Maggio et al., 2019). These factors have motivated the use of virtual supermarkets to assess and train individuals with cognitive disorders. The search resulted in 11 studies, two non-experimental and nine experimental, involving 312 participants with stroke, 33 participants with TBI, 37 participants with ABI of unspecified aetiology, and 134 HC. Virtual supermarkets have mainly focused on executive functions (n = 7), but also on memory (n = 3), attention (n = 1), upper-limb motor function (n = 2), and neglect (n = 1).

Of the 11 studies, four involved the Virtual Action Planning Supermarket (VAPS), a virtual supermarket where users are required to purchase seven items from a list of products. In the first study, Josman et al. (2006) examined the feasibility of using the VAPS to assess and treat executive function deficits in 26 individuals with stroke. The distance traveled, the number of purchased items, correct and incorrect actions, the number of pauses, and the time required to pay were used to assess performance in the virtual task. The VE was shown on a screen, and participants used a keyboard and a mouse to navigate and select objects. Participants’ executive functions were assessed with the Behavioral Assessment of Dysexecutive Syndrome (BADS) (Wilson et al., 1996). Moderate correlations between virtual and clinical measures were found between the number of items purchased, mean correct actions, mean duration of the pauses, and the key search subtest from the BADS (r = 0.40, r = 0.47, and r = −0.44, p < 0.05, respectively) (Josman et al., 2006). In a later study, Sorita et al. (2014) studied the VAPS with 12 stroke participants, 33 TBI participants, and 5 participants with ABI of unspecified aetiology. Performance in the VAPS and neuropsychological functioning (episodic memory, prospective memory, working memory index, perceptual organization index, processing speed, go/no-go errors, divided attention omissions, visual scanning omissions) were assessed, and the authors also collected data from the Community Integration Questionnaire (CIQ) (Willer et al., 1993). A principal component analysis yielded a 4-factor model that accounted for 70% of the total variance. The authors speculated that performance in the VAPS could be partially associated with neuropsychological processes, as measured by the assessment tools. Results indicated that performance in the virtual supermarket could not be explained by executive functions alone, but may have involved other cognitive processes, such as episodic and prospective memory, divided attention, and visual scanning (Sorita et al., 2014). In a third study, Josman et al. (2014) compared the performance of 24 stroke participants to that of 24 HC in the VAPS and the BADS test. Stroke participants were also assessed with the Revised Observed Tasks of Daily Living (OTDL-R) (Diehl et al., 2005), a performance-based test of everyday problem solving. VAPS performance was correlated with the BADS in the experimental group. Interestingly, the number of items purchased and the number of correct actions significantly correlated with the OTDL-R (r = 0.64, p = 0.001, and r = 0.68, p = 0.0001, respectively). In the most recent study, Cogné et al. (2018) used the VAPS to investigate how high-load non-contextual auditory stimuli affect navigational performance in stroke patients and how this relates to dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds, and names of other products that were not available in the VAPS. The control condition had no auditory stimuli. To assess executive functioning, they used the Groupe de réflexion pour l’évaluation des fonctions exécutives (GREFEX) battery. The navigational performance of the 40 stroke patients decreased under the four conditions with non-contextual auditory stimuli, especially for those with dysexecutive disorders. Lower performance was related to more GREFEX tests failed across the five conditions. Patients felt significantly more disadvantaged by the non-contextual sounds from living beings, supermarket objects and names of other products than by beeping sounds. Also, patients’ recall of the collected objects was significantly lower under the condition with names of other products. Moreover, left and right brain-damaged patients did not differ in navigational performance in the VAPS under the five auditory conditions. One of the most important outcomes of this study was that non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli (Cogné et al., 2018).

Kang et al. (2008) simulated a virtual shopping task in another virtual supermarket that consisted of four aisles and four glass-fronted fridges with 50 different items. The VE was displayed in an HMD, and a joystick enabled navigation and item selection. The experimental sessions with the system consisted of three sub-tasks: (1) find three displayed items (assessment of the interface and of visuospatial functions: visual attention, visuomotor coordination, and visual organization); (2) select the highlighted items with auditory or visual cues, and respond to unexpected events such as an item dropping (assessment of immediate and delayed recognition memory, auditory and visual memory, and attention); and (3) select a shopping item according to a given request and a designated price (assessment of planning, problem solving, and calculation). The authors compared the performance of 20 stroke participants with that of 20 HC and found significant differences between groups in the performance index, interaction error, delayed recognition memory score, auditory memory score, visual memory score, attention index, attention reaction time, and executive index (Kang et al., 2008).

In another study, Raspelli et al. (2012) evaluated a virtual version of the Multiple Errands Test (Alderman et al., 2003), the VMET, which was designed on a cost-free VR platform based on open-source software. The VE, consisting of a virtual supermarket with several shelves that displayed different items, was presented on a screen, and interaction was facilitated with a joypad. Three groups of participants were included in the study and were required to select and buy several products in the VMET: nine individuals post-stroke, 10 young HC, and 10 older HC. Results showed moderate to strong correlations between performance in the VMET and the Test of Attentional Performance (Zimmerman and Fimm, 1992) for participants with stroke. Specifically, correlations were found between the time to complete the task in the VMET and the subtest for the state of alert with warning sign (r = 0.762, p = 0.028), total errors in the VMET and the subtest of incompatibility (r = 0.75, p = 0.019), and inefficiencies in the VMET and the subtest of attention shift with valid stimulus (r = 0.67, p = 0.045). Significant differences emerged among the three groups in the VMET. In general, young adults performed better than older adults, and both young and older HC participants performed better than individuals with stroke (Raspelli et al., 2012).

Yip and Man (2013) developed a virtual supermarket presented on a screen and interacted with via a joystick or keyboard, according to each participant’s preference. The system consists of different tasks, such as buying discounted items, memorizing and picking items from a list, and discriminating items that have a special tag from the rest, which allows training of prospective memory, retrospective memory, and inhibition, respectively. A total of 37 individuals with ABI of unspecified aetiology participated in a 5 to 6-week RCT. Participants were divided into an experimental group (n = 19), which trained prospective and retrospective memory and inhibition in the virtual supermarket, and a control group (n = 18), which participated in reading activities and table games. Prospective memory was assessed before and after the intervention with the Cambridge Prospective Memory Test (CAMPROMPT) (Wilson et al., 2005). Further assessments included the Hong Kong List Learning Test (Chan and Kwok, 2009), the Frontal Assessment Battery (FAB) (Dubois et al., 2000), the Word Fluency Test (Marsh and Hicks, 1998; Henry and Crawford, 2004), the Community Integration Questionnaire (Willer et al., 1994), a customized self-efficacy questionnaire, and a battery of tasks in a real convenience store. Participants had to perform three event-based tasks (e.g., calling back home when they saw particular products or writing down the price of particular drinks), two time-based tasks, and had to shop for eight items according to a shopping list. The experimental group, in addition, completed a VR-based test that consisted of three event-based and three time-based tasks with a difficulty comparable to the most difficult level of the VR program. Results showed moderate correlations between the real-life event-based tasks and most of the VR-based measures, including accuracy of event-based (r = 0.58, p < 0.01), time-based (r = 0.48, p < 0.05), and on-going task performance (r = 0.52, p < 0.05). The accuracy of time-based tasks in VR and in real life was also moderately correlated (r = 0.55, p < 0.05). After the intervention, significant improvements were found in the experimental group in most VR-based measures, such as immediate recall, on-going task performance, and number of time checks, and in the event-based (p < 0.01) and time-based tasks (p < 0.01) of the real-life test. No significant difference was found in any outcome measure for the control group (Yip and Man, 2013).

Virtual supermarkets have also been used to train upper-limb function. Yin et al. (2014) used a virtual supermarket where participants had to pick a virtual fruit from a shelf and release it into a virtual basket as many times as possible within a two-minute trial. The VE recreated a local supermarket, aiming to increase familiarity and engagement. Interaction was facilitated by a Sixense hand-held remote controller (Sixense Entertainment, USA): participants held the controller with their affected hand, and their movements were detected and transferred to a virtual hand avatar in the VE. In an RCT with this simulated supermarket, participants were randomly divided into an experimental group (n = 11) and a control group (n = 12). The experimental group received nine 30-min VR-based training sessions plus additional physical and occupational therapy. The control group received time-matched physical and occupational therapy. Assessment of motor function included the FMA (Fugl-Meyer et al., 1975), the Action Research Arm Test (ARAT) (Lyle, 1981), the Motor Activity Log (Taub et al., 1993) and the Functional Independence Measure (FIM) (Grimby et al., 1996) and was conducted at baseline, post-intervention, and 1 month post-intervention. Although both groups improved their performance, results showed non-significant differences in all clinical measures both from baseline to post-intervention and to follow-up (Yin et al., 2014). Recently, Demers and Levin (2020) explored whether reach-to-grasp movements in a low-cost 2D VE were kinematically similar to those made in a physical environment (PE) in healthy controls and stroke patients. In their study, participants (HC = 15, stroke = 22) had to make unilateral and bilateral reach-to-grasp movements in a 2D VE and a similar PE. The VE was a grocery-shopping task with two scenes representing aisles filled with typical supermarket items, presented on a large screen and interacted with through a Kinect II camera. The hands and forearms were represented by avatars viewed from a first-person perspective. The PE included only the object to be grasped, since additional objects would have interfered with the ability of the camera to track the arm movement. Temporal and spatial characteristics of the endpoint trajectory and arm and trunk movement patterns were compared between environments and groups. Hand positioning at object contact time and trunk displacement were unaffected by the environment. Compared to the PE, unilateral movements in the VE were less smooth and time to peak velocity was prolonged. In HC, bilateral movements were simultaneous and symmetrical in both environments. In stroke patients, movements were less symmetrical in the VE. The authors considered that using a low-cost 2D VE might be a valid approach for upper-limb rehabilitation after stroke (Demers and Levin, 2020).
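These kinematic comparisons rest on simple endpoint metrics. The sketch below illustrates, on a simulated trajectory, how time to peak velocity and a rough smoothness proxy (the number of velocity peaks) can be derived from sampled hand positions; the metrics and sampling rate are assumptions for illustration, not the exact pipeline used by Demers and Levin (2020).

```python
# Minimal sketch: time to peak velocity and a crude smoothness proxy
# (number of local velocity peaks) from a sampled reach trajectory.
# The trajectory and the 100 Hz rate are simulated, not study data.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
# Simulated 1-s reach with a bell-shaped speed profile plus a small tremor.
x = 0.3 * (t - np.sin(2 * np.pi * t) / (2 * np.pi)) + 0.002 * np.sin(2 * np.pi * 8 * t)
positions = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)

speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs  # m/s
time_to_peak = (np.argmax(speed) + 1) / fs
peaks, _ = find_peaks(speed)                 # more peaks -> less smooth movement

print(f"Time to peak velocity: {time_to_peak:.2f} s; velocity peaks: {len(peaks)}")
```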

Ogourtsova et al. (2018) examined the feasibility of a functional shopping activity, the Ecological VR-based Evaluation of Neglect Symptoms (EVENS), for the assessment of unilateral spatial neglect (USN). The EVENS consists of simple and complex 3D scenes depicting grocery-shopping shelves, in which joystick-based object detection and navigation tasks are performed while seated. The authors compared the effects of virtual scene complexity on navigational and detection abilities in right-hemisphere stroke patients with USN (n = 12) and without USN (n = 15) and in age-matched HC (n = 9). Participants with USN demonstrated longer detection times, larger mediolateral deviations from ideal paths, and longer navigation times in comparison to participants without USN and HC. The EVENS detected lateralized and non-lateralized USN-related deficits, performance alterations that were dependent on or independent of USN severity, and performance alterations in 3 USN participants compared to HC (Ogourtsova et al., 2018).

From a different perspective, Spreij et al. (2020b) examined the feasibility and user experience of a virtual supermarket by comparing non-immersive VR through a computer monitor and immersive VR through an HMD. The virtual supermarket was modeled after a regular Dutch supermarket with 18 shelves, eight cash registers, 20,000 real-brand products (e.g., bread, fruit, vegetables) and freezing compartments. The VR task consisted of finding three products from a shopping list and passing through the cash registers. Two different grocery lists were randomly presented over three trials (15 min each), and participants were asked to recall the products. In total, 88 stroke patients and 66 HC performed the VR task twice, with the computer monitor (a wired Xbox 360© controller was used to navigate) and with the HMD (Oculus Rift DK2© with an Xbox 360© controller or the HTC Vive© with its own controllers), with the order of conditions randomized. Although both stroke patients and HC reported an enhanced feeling of engagement, transportation, flow, and presence when tested with an HMD, more negative side effects were experienced, especially with the older Oculus Rift DK2. Most stroke patients had no preference for one interface over the other, yet younger patients tended to prefer the HMD. The HMD seems preferable for neuropsychological assessment since it induces more natural behavior, but a computer monitor remains a valid alternative (Spreij et al., 2020b).

3.3. Shopping malls

In addition to the skills required to cope with supermarkets, shopping malls require navigation and searching for specific locations in more open spaces. The complexity of the distribution of shops and their interconnecting walkways, as well as the sensory overload, can pose a challenge for individuals with attention deficits, such as those commonly experienced after an injury to the brain, and consequently affect cognitive and motor performance (Koch et al., 2018). These factors have motivated the creation of virtual shopping malls for the assessment and/or rehabilitation of cognitive and/or motor functions (Table 3). Ten studies were included in this analysis, involving 62 individuals with stroke, 67 individuals with TBI, and 233 healthy individuals, in four validation studies, three pilot studies, two evaluation studies, and a single-subject design. Shopping malls have been used to assess or train attention (n = 1), executive functions (n = 9), memory (n = 2) and upper limb function (n = 2). Eight of the ten studies used the same VE, the VMall, or its evolution, the VIS (Rand et al., 2007, 2009a,b,c; Hadad et al., 2012; Erez et al., 2013; Jacoby et al., 2013; Nir-Hadad et al., 2015). The remaining two studies used a touch panel or a keyboard and a mobile phone to interact with the VE (Okahashi et al., 2013; Canty et al., 2014).

The VMall simulates a large mall where users can navigate through aisles. The VE runs on the IREX® platform (GestureTek, Toronto, Canada) and is interfaced by sustained arm reaches, which are captured through color tracking. The VMall engages participants in a complex everyday shopping task that trains the upper extremities and executive functions (EF), primarily planning, multitasking, and problem solving. Rand and colleagues investigated the potential of the VMall as an evaluation tool in a study that compared the performance of 14 post-stroke participants to that of 93 HC (Rand et al., 2007). All participants were required to shop for four grocery items in the VMall and complete the Short Feedback Questionnaire (SFQ) (Witmer and Singer, 1998) and the Borg Scale of Perceived Exertion (Borg, 1990). Total shopping time was, in general, significantly longer for participants with stroke, possibly due to their overall impaired motor control and EF. Significant differences were found for the Borg Scale of Perceived Exertion between the groups [F(3,61) = 5.9, p < 0.001], while no significant differences emerged in the SFQ. A later study by the same group investigated the effectiveness of an in-home intervention with the VMall in individuals with stroke (Rand et al., 2009a). Six stroke participants received ten 1-h treatment sessions over a period of 3 weeks, and the motor and functional abilities of their weaker upper extremity were assessed before and after the intervention. Results showed a relative improvement in the FMA (0.24 ± 0.24), in the ability score of the Wolf Motor Function Test (WMFT) (0.30 ± 0.34), and in the number of tasks performed using both hands from the Questionnaire of Upper Extremity Function in Daily Life (Broeks et al., 1999) (0.22 ± 0.4). In a later study, the authors explored the effectiveness of an intervention with the VMall to improve multitasking deficits after stroke (Rand et al., 2009c). Four participants received ten 60-min sessions with the system over 3 weeks. Assessment included the Multiple Errands Test in both a real mall and the VMall. Participants showed improvements in most of the error-based measures (total number of mistakes, rule break mistakes, inefficiency mistakes, use of strategies mistakes) that ranged from 20.5 to 51.2%. The ecological validity of the VMall was analyzed in a later study by the same group (Rand et al., 2009b). The authors compared the performance of three groups of participants (stroke, young and older HC participants) in an adaptation of the Multiple Errands Test that was administered in a real shopping mall and in the VMall. The executive function of stroke participants was assessed with the Zoo Map Test of the BADS and their level of independence during instrumental ADLs with an ad-hoc questionnaire. Significant moderate to high correlations were found between performance in the real and virtual scenarios for both post-stroke participants [total number of mistakes (r = 0.70, p = 0.036), partial mistakes (r = 0.88, p = 0.002), inefficiency mistakes (r = 0.73, p = 0.025)] and older healthy participants [total number of mistakes (r = 0.66, p = 0.01), complete mistakes (r = 0.58, p = 0.01), partial mistakes (r = 0.61, p = 0.01), and inefficiency mistakes (r = 0.66, p = 0.01)]. The virtual version of the test was able to differentiate between younger and older healthy participants, and also between healthy and stroke participants.
Concerning stroke participants, significant moderate to high correlations were found between the Zoo Map Test and performance in both the real and virtual versions of the Multiple Errands Test. Specifically, the Zoo Map Test correlated with the number of errors (r = −0.93, p < 0.001), partial mistakes in completing tasks (r = 0.80, p < 0.009), non-efficiency mistakes (r = 0.86, p < 0.003), and the time to complete the Multiple Errands Test (r = 0.79, p < 0.012), and also with the non-efficiency mistakes in the virtual version of the test (r = −0.76, p < 0.04). Also for participants with stroke, significant moderate to high correlations were found between scores in the questionnaire and rule breaks in the Multiple Errands Test (r = 0.80, p < 0.03), and mistakes in both the real (r = −0.76, p < 0.04) and the virtual version of the test (r = −0.82, p < 0.02). In another study, Jacoby and colleagues performed a pilot RCT to examine the effectiveness of the VMall at improving EF after TBI (Jacoby et al., 2013). Twelve participants were randomized either into an experimental group, who trained planning and execution of shopping tasks with a restricted budget in the VMall, or a control group, who participated in a conventional occupational therapy program. All participants received ten 45-min sessions and were assessed before and after the intervention with the Multiple Errands Test and the Executive Function Performance Test (Baum et al., 2009). No significant differences between groups emerged before and after the intervention. However, participants in the experimental group showed greater relative improvement in comparison to the control group in both outcomes (z = −1.761, p = 0.046; and z = −1.761, p = 0.046, respectively). Finally, the VMall was also used in a study that compared the performance of 20 children with TBI in a simple virtual shopping task with that of 20 typically developing peers (Erez et al., 2013) to determine the feasibility of the VE in a pediatric population with TBI. Participants were required to shop for four items in the VMall, and their planning abilities were evaluated with the Zoo Map subtest of the BADS test for children. Different performance between groups was detected in the mean shopping time (z = −3.05, p = 0.002) and number of mistakes (z = −1.95, p = 0.068), which were higher for children with TBI. In addition, the time to complete the shopping task in the VMall and the Zoo Map score together correctly classified 75% of the participants into each group, whereas the time to complete the shopping task alone correctly classified 65% of the participants.

Two studies included in this review used the newer version of the VMall, the VIS, which, unlike its predecessor, allows for creating different shopping malls by changing and customizing the stores, and the types, quantities, and locations of the groceries in each store. The VE runs on the SeeMe system (Brontes Processing, Poland) and is similarly interfaced by sustained arm reaches, but detected with the Microsoft Kinect. Navigation within and between shopping aisles and selection of groceries are facilitated by virtually “touching” directional arrows and “hovering” over photos of the groceries, respectively. Hadad et al. (2012) compared the shopping performance of stroke and healthy participants in three different environments: a real environment (hospital cafeteria), a store mock-up (physical simulation), and the VIS. Five stroke individuals and six HC were required to purchase four grocery items in the VIS and the store mock-up, and to repeat the task with budget constraints in all three environments. Results showed that post-stroke participants required more time to finish the task than HC in all the environments. In addition, for both groups, time to complete the task within the VIS was longer than in the store mock-up and the cafeteria (Hadad et al., 2012). A later study by the same authors examined the discriminant, construct-convergent, and ecological validity of the purchasing task for assessing performance in instrumental ADLs. A supermarket, a toy store, and a hardware store were recreated in the VIS, and 19 people with stroke and 20 HC performed the shopping task in the VE and in a real shopping environment (the same hospital cafeteria of the previous study) in counterbalanced order. Executive functions of all the participants were also assessed, among other tests, with the Rule Shift Cards subtest from the BADS (Wilson et al., 1996), and the Telephone Use and Bill Payment subtests of the Executive Function Performance Test. Concerning discriminant validity, the control group required significantly less time to complete the shopping task (U = 55.50, p = 0.001) and traveled a shorter distance (U = 81.00, p = 0.002) in the VIS than participants with stroke. The best performance was detected in HC in the cafeteria, evidenced by significantly fewer budget excesses (U = 111.00, p = 0.007) and less assistance from the cashier (U = 96.50, p = 0.008). Concerning convergent and ecological validity, significant moderate correlations emerged for both the control group [time to the first purchase (r = −0.49, p < 0.05), total time (r = −0.47, p < 0.05), and number of errors (r = −0.46, p < 0.05)] and the stroke group [distance traveled in the VIS and time to the first purchase in the cafeteria (r = 0.61, p < 0.01)]. Better performance in the clinical assessments of EF was related to better performance in the VIS for all participants. For HC, the Telephone Use significantly correlated with the time of total purchase in the VIS (r = 0.51, p < 0.05), and the Rule Shift Cards significantly correlated with the distance traversed in the VIS (r = −0.57, p < 0.01).
For stroke participants, significant correlations emerged between the Bill Payment and the time until the first purchase (r = 0.57, p < 0.05) and for total purchase in the VIS (r = 0.56, p < 0.05), and also between the Telephone Use and the time for total purchase in the VIS (r = 0.55, p < 0.05) and between the Rule Shift Cards and the time until the first purchase in the VIS (r = −0.53, p < 0.05) (Nir-Hadad et al., 2015).

Okahashi and colleagues developed the VST, a virtual Japanese-style shopping mall, to assess general cognitive function (Okahashi et al., 2013). In the virtual test, participants are required to memorize items to buy, look for specific shops on a street, choose items in a shop, and perform different tasks. The VE is presented on a multi-touch display and interacted with through finger touches. The convergent validity of the VST with conventional tests [Mini-Mental State Examination (MMSE) (Folstein et al., 1975), Symbol Digit Modalities Test (Smith, 1982), Simple Reaction Time Task (Beck et al., 1956), Rivermead Behavioral Memory Test (Wilson et al., 1987a), Everyday Memory Checklist (Kazui et al., 2006)] and the effect of age were investigated in a study that involved five stroke and five TBI participants and ten age-matched HC. Results evidenced moderate to high correlations between measures of the VST and conventional tests. Regarding attention, total time in the VST correlated with the completion rate of the Symbol Digit Modalities Test (r = −0.80, p < 0.01) and the correct rate of the Simple Reaction Time Task (r = −0.89, p < 0.01). Regarding memory, bag use in the VST correlated with the pictures score of the Rivermead Behavioral Memory Test (RBMT) (r = −0.65, p < 0.05); list use correlated with the standard profile score (r = −0.71, p < 0.05), the belonging score (r = −0.67, p < 0.05), and the appointment score of the RBMT (r = −0.73, p < 0.05); the number of correct purchases correlated with the date score of the RBMT (r = 0.67, p < 0.05); and total time correlated with the standard profile score (r = −0.71, p < 0.05) and the appointment score of the RBMT (r = −0.88, p < 0.01). In addition, participants with brain damage required more hints (p < 0.05) and made more movements (p < 0.05) to perform the task than healthy controls. Older healthy participants spent significantly more time performing the task than younger HC (p < 0.01).

Another virtual mall, the Virtual Reality Shopping Task (VRST), was presented by Canty and colleagues (Canty et al., 2014). The VE simulates a shopping center in which participants can navigate through different stores and interact with a mobile phone, a store map, and a list of tasks. The VE is presented on a TV screen and interacted with via a keyboard. The authors evaluated the sensitivity of three VR-based shopping tasks, their ecological validity, and their convergent validity with the Lexical Decision Prospective Memory Task (LDPMT) (Einstein and McDaniel, 1990). Thirty individuals with severe TBI and 24 HC were required to purchase items in a pre-specified order from a selection of shops, text on a virtual mobile phone at different moments, and press a key when a sale announcement was heard. Results showed that the performance of individuals with TBI was significantly worse than that of controls on time- and event-based tests in both the VRST and the LDPMT. For participants with TBI, moderate correlations emerged between event-based prospective memory performance on the VRST and on the LDPMT (r = −0.657, p < 0.001), and between total prospective memory performance on the VRST and event-based performance on the LDPMT (r = 0.662, p < 0.001) (Canty et al., 2014).

3.4. Streets

Sixteen studies that involved 418 individuals with stroke, 90 individuals with TBI, six ABI individuals with unspecified etiology, six individuals with other types of brain injury including cerebral tumor and cortical cyst, and 102 healthy participants have been included in this review. Streets have been recreated in VR to assess or train Unilateral Spatial Neglect (USN) (n = 6), driving skills (n = 4), and route retraining (n = 4) (Table 4).

Safe street crossing is especially demanding for individuals with hemispatial neglect, as it requires dealing with cars coming from both sides. Three studies included in this review used the City Street. The VE consists of a typical local city street with an avatar facing a crosswalk and vehicles approaching from different directions at different speeds, and allows for practicing street crossing with different levels of difficulty. The VE was shown on a 15-inch PC monitor or projected on a wall, and the interaction was facilitated with a keyboard. In a first study, Naveh et al. (2000) carried out a feasibility study comparing the performance of six individuals with stroke (four with USN) and six HC in the City Street. Participants performed between one and four sessions with the VE, of varying durations (from 30 to 60 min). In each session, participants practiced street crossing with progressive difficulty as they completed the previous levels. Although no statistical differences were examined, participants with stroke seemed to require more time to complete the task (Naveh et al., 2000). A later study by the same group, with the same procedure and number of participants, reported that HC took less time to complete the task, looked less frequently at oncoming traffic, and had fewer accidents than patients when the difficulty increased. As in the previous study, no statistical analysis was performed. In this latter study, the authors also examined the effect of display size on subject performance in the virtual task and found that, although participants looked to both sides more often when the VE was displayed on a projection, they had more accidents and needed more time to complete the tasks (Weiss et al., 2003). In a later study, the same group evaluated the effectiveness of the City Street in a sample of post-stroke USN participants. Participants were pseudo-randomized to a control group (n = 8), who trained with a computer-based visual scanning task, or to an experimental group (n = 11), who trained street crossing in the City Street. The intervention in both groups consisted of twelve 45-min sessions administered three times a week. Participants were assessed before and after the intervention with a battery of tests that evaluated the level of neglect [Star Cancellation Test of the Behavioural Inattention Test (BIT) (Wilson et al., 1987b) and the Mesulam Cancellation Test (Mesulam, 2000)], and performance during street crossing in the VE (number of times that participants looked to the left and number of accidents) and in the real world (number of times that participants looked to the left, and decision time). Performance in the real world was assessed from videotaped recordings. Although the performance of both groups seemed to improve after the intervention, no notable differences emerged between groups (Katz et al., 2005).

Navarro et al. (2013) also developed a VR system to practice street crossing, which consisted of a 47-inch LCD screen that displayed the VE, an infrared tracking system that detected head turns, and a joystick that enabled navigation in the virtual world. The VE is shown from a first-person perspective and consists of a crosswalk that crosses two two-way roads with median strips, with traffic approaching as in real life. Participants were required to safely navigate from an origin point to an end point on the other side of the road and come back. A total of 15 HC and 32 stroke participants (17 with USN and 15 without USN) were included in a validation study. The authors found that HC performed better in terms of time to complete the task and safety. A similar tendency emerged when considering participants with stroke: participants without USN finished the task faster (F = 28.9, p < 0.01) and more safely (F = 55.8, p < 0.01) than those with USN. Importantly, the presence of neglect was a significant predictor of the number of accidents (t = 6.5; p < 0.001). Regarding convergent validity, the time to complete the virtual task and the number of accidents had moderate to strong correlations with timed tests, such as the Conners' Continuous Performance Test (Conners et al., 2003) (r = 0.5–0.6, p < 0.05) and the Color Trails Test (CTT) (D'Elia et al., 1994) (r = 0.55–0.75, p < 0.05). In addition, the score in the BIT showed moderate to strong correlations with the number of accidents (r = −0.77, p < 0.01) and the number of head turns to the left (r = 0.4, p < 0.05) (Navarro et al., 2013).

Kim et al. (2007) also proposed an HMD-based system to train street crossing in stroke patients with USN. The training procedure consisted of completing missions while keeping the virtual avatar safe when crossing the street. While searching for a virtual vehicle appearing on the right or left side of the avatar, the subject had to respond to the system by pushing the mouse button upon finding the car. If the subject did not recognize the car's movement by the time it was at two-thirds of the distance from the starting position to the avatar position, the car turned on its headlights (visual cue) to notify the subject that it was approaching the avatar. If, despite the visual cue, the subject still did not recognize the car's movement, an alarm sounded (auditory cue) at one-third of the distance from the starting position to the avatar position. The system's difficulty was controlled by level (car velocity) and stage (distance between the avatar and the subject). The authors compared the performance of 10 stroke patients with USN with that of 40 HC considering the following parameters: deviation angle, reaction time, right and left reaction time, visual cue, auditory cue, and failure rate of the mission. Additionally, traditional measures such as line bisection and cancellation tests were analyzed. Results suggested that the proposed VE system was appropriate for USN training and showed a large effect in the stroke group (r = 0.81) (Kim et al., 2007).
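A minimal sketch of this escalating cue protocol is given below, assuming a remaining-distance reading of the thresholds (headlights once the undetected car is within two-thirds of the start-to-avatar distance, an alarm once it is within one-third). This is an illustration only, not the authors' implementation:

```python
from typing import Optional

# Illustrative sketch only (not the authors' code), assuming the remaining-distance
# reading of the description above: visual cue once two-thirds of the car's path to
# the avatar remains and the patient has not responded, auditory alarm once only
# one-third remains.
def select_cue(remaining: float, total: float, car_detected: bool) -> Optional[str]:
    """Return the cue to present, given how far the approaching car still is."""
    if car_detected:
        return None
    fraction_left = remaining / total
    if fraction_left <= 1 / 3:
        return "auditory_alarm"      # car very close and still undetected
    if fraction_left <= 2 / 3:
        return "visual_headlights"   # first warning: headlights switch on
    return None

# Example: the car has covered half of its path and the patient has not reacted yet
print(select_cue(remaining=5.0, total=10.0, car_detected=False))  # visual_headlights
```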

Finally, for virtual road-crossing assessment, Wagner et al. (2021) developed the iVRoad to detect discrete symptoms of USN in right-hemispheric post-stroke patients. The task consisted of dropping a letter in a mailbox on the way to work and was presented through an HTC Vive. To do so, users first had to safely cross two roads and the square in between, and then return to the starting position to continue their way to work. The authors performed a study with 18 stroke patients to evaluate the iVRoad with respect to usability, satisfaction, sense of presence, and cybersickness. Moreover, they examined patients with and without USN and identified parameters for distinguishing between them, such as decision time, error rate, and head direction ratio. The interaction with the iVRoad through the HTC Vive controller could be used without difficulty by all patients (Wagner et al., 2021).

Driving simulators have been used in three studies to examine the driving skills of persons with ABI (Wald et al., 2000; Akinwuntan et al., 2005; Devos et al., 2009; Wagner et al., 2021). Wald et al. (2000) presented the DriVR, a VR driving simulator consisting of ten testing scenarios presented in an HMD, which appear in a continuous sequence as the participant drives through a small town roughly 1.4 km square. The authors compared indicators of driving ability of 28 adults with TBI in the DriVR with their performance in an on-road test and in a video test, the Driver Performance Test II, and with their performance in the Trail Making Test (TMT) (Reitan, 1958) and the Adult Visual Perception Test (Baylor University Medical Center, n.d.) (Wald et al., 2000). Overall, performance in the VR test did not correlate with the on-road and video driving tests; only moderate correlations were reached between the ability to maintain the lane while passing parked cars and while oncoming cars passed in the DriVR and the number of failures in the on-road test (r = 0.56 and r = 0.50, respectively). Weak, non-significant correlations were found between performance in the DriVR and the neuropsychological assessment tests. In another study, the same group explored the effect of simulator-based training on driving after stroke. The simulator consisted of a full-bodied car with all its original mechanical parts, and the VE was projected on a 2.3 × 1.7 m screen with a visual angle of 45°. The VE represented an interactive 13.5 km scenario. A total of 73 stroke individuals participated in this study and were randomly allocated to either an experimental (simulator-based training) or control (driving-related cognitive tasks) group. Both groups completed a total of 15 one-hour sessions administered three times a week and were assessed before and after the intervention with an on-road test and with the Stroke Driver Screening Assessment, and 6 months after the intervention with an official pre-driving exam. Although both groups improved after the training, no significant differences were detected between groups in any measure except sign recognition. However, a greater number of participants in the experimental group improved their classification in the on-road test, even though the difference did not reach statistical significance. More importantly, 73% of the participants in the experimental group were legally allowed to drive, in contrast to 42% of those in the control group (Akinwuntan et al., 2005). A later item-by-item analysis of the data showed that training in the simulator provided greater improvements in visual behavior, perception and anticipation of road signs and traffic, and turning left (Devos et al., 2009).

Virtual streets have also been used to assess and rehabilitate skills related to route retraining (Titov and Knight, 2005; Lloyd et al., 2006, 2009; Sorita et al., 2013). Titov and Knight (2005) developed a system that simulates a street scene using photographs and sounds displayed on, and interacted with via, a touch screen to assess the ability to remember instructions. Two individuals with stroke and one individual with TBI completed three tasks in the virtual street. In the first task, participants had to remember five errands. In the second and third tasks, participants had to follow instructions; in the former, a list with the items was provided to facilitate the task, but not in the latter. Participants were also assessed with a battery of neuropsychological tests that included the Digit Span and Word Lists sub-tests from the Wechsler Memory Scale-III (WMS-III) (Wechsler, 1997a), the National Adult Reading Test (Nelson and Willison, 1991), the Wechsler Adult Intelligence Scale-III (WAIS-III) (Wechsler, 1997b), the Wisconsin Card Sorting Test (WCST) (Heaton, 2005), the FAS Test (Benton, 1968), and the Stroop Test (Spreen and Strauss, 1991). The performance of the individuals with brain injury in the VE was compared to that of three matched HC, who performed better (Titov and Knight, 2005).

Lloyd et al. (2006, 2009) used an off-the-shelf gaming console, the PlayStation 2 (Sony, Tokyo, Japan), and a driving videogame, Driv3r (Reflections Interactive, Newcastle upon Tyne, UK), to examine the effectiveness of errorless learning in comparison to traditional trial-and-error learning of routes. The VE was displayed on a 21-inch TV screen and interaction was facilitated with a control pad operated by an experimenter. Participants (eight TBI, six stroke, and six with brain tumors or cortical cysts) were required to learn a route using both techniques and repeat it afterwards. Significant differences were found between the number of errors made under the two conditions, with errorless training yielding fewer errors (Lloyd et al., 2006, 2009). In another study, Sorita et al. (2013) examined the route-learning ability of 27 individuals with TBI. Participants were allocated to one of two groups and were required to learn a route in either a real (n = 13) or a virtual scenario (n = 14). The VE simulated a virtual street without cars or pedestrians, was projected onto a 2.5 m wide and 1.8 m tall screen, and was interacted with via a gamepad. Both groups had to recall the route twice in the corresponding scenario, immediately after the experiment and again after 24 to 48 h. In addition, participants had to draw a sketch map of the route, select the correct map representing the route among four possible options, and finally arrange 12 pictures of the route in chronological order. Results showed no significant differences between the groups except in the arrangement task, where subjects who practiced in the real environment performed better (p = 0.01) (Sorita et al., 2013).

In a different approach to street VEs, Park (2015) compared the incidence of driving errors among 30 participants with left or right hemispheric lesions due to stroke. Driving errors were assessed using a VR driving simulator (GDS-300, Gridspace). The test course simulated driving in downtown Seoul and on the highway and was designed to resemble actual driving, incorporating various buildings, moving cars, traffic signals, and road signs. Significant differences were shown in center line crossing frequency, accident rate, brake reaction time, total driving error scores, and overall driving safety between participants with left and right hemispheric lesions, corroborating that rehabilitation specialists should consider hemispheric function when teaching driving skills to stroke survivors (Park, 2015). Spreij et al. (2020a) also used a simulated driving task to assess differences in driving performance between patients with left- (n = 33) and right-sided VSN (n = 7), recovered VSN (n = 7), without VSN (n = 53), and HC (n = 21), as well as to measure VSN severity and the diagnostic accuracy of the driving simulator in comparison to traditional tasks. Stroke patients were tested with a cancellation task, the Catherine Bergego Scale, and the simulated driving task, which consisted of a straight road without intersections or oncoming traffic projected on a large screen. Participants were seated in front of the screen with a steering wheel fixed on a table. The simulated driving speed was approximately 50 km/h and participants were instructed to use the steering wheel to keep the starting position at the center of the right lane, which required them to adjust their position continuously. When participants drove off the road, the projection vibrated as a warning sign. Patients with left-sided VSN and recovered VSN deviated more in their position on the road compared to patients without VSN, and the deviation was larger in patients with more severe VSN. Regarding diagnostic accuracy, 29% of recovered VSN patients and 6% of patients without VSN showed abnormal performance on the simulated driving task. The sensitivity was 52% for left-sided VSN (Spreij et al., 2020a).
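The lateral-deviation measure described here can be illustrated with a short sketch; the samples and lane reference below are hypothetical, and the computation is only one plausible way of operationalizing "deviation of position on the road":

```python
# Illustrative sketch (hypothetical data) of a lateral-deviation metric for a
# simulated driving task: the deviation of the car's sampled lateral position
# from the centre of the right lane over the course of the drive.
import numpy as np

lane_center = 0.0                                               # target lateral position (m)
sampled_positions = np.array([0.4, 0.7, 1.1, 0.9, 1.3, 0.8])    # hypothetical samples (m)

deviation = sampled_positions - lane_center
mean_signed_deviation = deviation.mean()        # consistent drift to one side suggests neglect
mean_absolute_deviation = np.abs(deviation).mean()

print(f"mean signed deviation: {mean_signed_deviation:.2f} m")
print(f"mean absolute deviation: {mean_absolute_deviation:.2f} m")
```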

Finally, Ettenhofer et al. (2019) developed the Neurocognitive Driving Rehabilitation in Virtual Environments (NeuroDRIVE), an intervention designed to improve cognitive performance, driving safety, and neurobehavioral symptoms in TBI patients. The authors used the General Simulation Driver Guidance System (Driving assessment and training method and apparatus, 2014). This simulator consisted of an 8-foot circular frame supporting a curved screen (180° field of view) and a driving console analogous to that found in a typical automobile. The driving console had turn signals, gas and brake pedals, a steering wheel, a digital dashboard, and a seat belt. Participants sat at the console and operated the steering wheel and pedals while responding to the VE projected onto the screen and to auditory stimuli from connected speakers. The authors conducted a feasibility study to examine the preliminary efficacy of NeuroDRIVE. The intervention consisted of six 90-min sessions that included: (1) a brief review of training and progress thus far; (2) practice of component cognitive skills such as dual processing, working memory, and response inhibition through the use of standardized cognitive driving scenarios; (3) practice of composite driving skills such as following the rules of the road and being vigilant for road hazards while simultaneously performing working memory or visual attention tasks; and (4) an open-ended race-track course to promote engagement and to allow participants to safely “test the limits” of their skills in a simulated environment. Eleven participants who received the intervention were compared to six waiting-list participants on driving abilities, cognitive performance, and neurobehavioral symptoms. The cognitive assessment protocol was the following: WAIS-III Digit Span, Symbol Search and Coding (Wechsler, 1997b), TMT A and B (Reitan, 1958), Controlled Oral Word Association Test (Letters & Animals), California Verbal Learning Test (CVLT) (Delis et al., 1988), Grooved Pegboard (Ruff and Parker, 1993), Neurobehavioral Symptom Inventory (Cicerone and Kalmar, 1995), PTSD Checklist-Civilian (Ruggiero et al., 2003), Beck Depression Inventory-II (BDI-II) (Beck et al., 1956), Epworth Sleepiness Scale (Johns, 1991), Fatigue Severity Scale (Krupp et al., 1989), Short Form Health Survey-36 (SF-36) (Ware, 1993), and Satisfaction with Life Scale (Diener et al., 1985). Participants who completed the NeuroDRIVE intervention had significant improvements in working memory and selective attention, two of the primary objectives of the intervention. There was no generalization of improvements to other cognitive domains, neurobehavioral symptoms, or driving skills (Ettenhofer et al., 2019).

3.5. Cities

Virtual cities have been considered in the literature to recreate daily living activities that involve moving around a wide area and visiting different locations to perform cognitively demanding tasks that require basic cognitive functioning (Gamito et al., 2011a,b, 2014, 2015; Jovanovski et al., 2012; Vourvopoulos et al., 2014; Faria et al., 2016, 2019, 2020; Oliveira et al., 2020; Table 5). A total of eleven studies simulating cities have been included in this review, involving 209 participants with stroke, five participants with TBI, one participant with Mild Cognitive Impairment (MCI), and 74 healthy participants. Designs included four RCTs, two feasibility studies, two validation studies, a case study, a pilot study, and a usability study.

Gamito and colleagues (2011) developed a small virtual town populated with several characters and buildings arranged in eight square blocks, along with a two-room apartment and a mini-market. The VE was displayed on an HMD and interaction, based on moving around and grabbing objects, was enabled using a keyboard and a mouse. The VE required participants to perform daily activities such as finding a supermarket and buying some items, or finding and retaining paths, characters, or advertisements. The potential of the VR system to improve attention and memory after TBI was investigated in a case study consisting of nine sessions administered at an unspecified frequency, with a mean duration of 45 min each. The participant was assessed with the Paced Auditory Serial Addition Task (Gronwall, 1977) before, during (after the fifth session), and after the intervention, with three- and two-second inter-stimulus intervals. The results showed a significant increase in the percentage of correct responses between the baseline and intermediate assessments for both trials, and between the intermediate and final assessments (Gamito et al., 2011a). A similar procedure with the same VE and instrumentation, consisting of nine weekly sessions of unspecified duration, was replicated with two individuals with stroke, whose memory and sustained attention were assessed with the WMS-III (Wechsler, 1997a) and the Toulouse-Piéron (TP) (Piéron, 1955), respectively (Gamito et al., 2011b). As in the previous study, the results revealed increased memory and attention capabilities after the intervention, although no statistical analysis could be performed. A later study by the same authors with the same system compared the impact of two possible displays, an HMD and a 21-inch monitor, on the effectiveness of the intervention (Gamito et al., 2014). Seventeen individuals with stroke were randomized into an HMD (n = 8) or a monitor (n = 9) display condition, and underwent the same intervention. In this later study, participants were also assessed with the Rey-Osterrieth Complex Figure (ROCF) (Osterrieth, 1954). Results evidenced a significant improvement in the WMS-III (p < 0.01), the ROCF (p < 0.05), and the TP (p < 0.01) in both groups. However, no statistical differences emerged between the two conditions, which suggests that both HMDs and screens could be valid alternatives for providing visual feedback during VR-based interventions on memory and attention. An RCT involving 20 individuals with stroke compared the effectiveness of a 4- to 6-week intervention with the same system, comprising two to three 60-min sessions per week (n = 10), with a waiting-list control group (n = 10) (Gamito et al., 2015). Participants were assessed with the same tests and procedure as in the previous study. In contrast to the control group, who showed no improvement, the experimental group significantly improved their scores in the WMS-III and the TP, giving rise to significant differences between groups in both tests. No differences over time or between groups were detected in the ROCF. Subsequently, Oliveira et al. (2020) also performed a study to test this ecologically oriented approach, depicting everyday life tasks (Systemic Lisbon Battery), in a sample of 30 sub-acute stroke inpatients in a rehabilitation hospital. This single-arm pre-post intervention study revealed improvements in global cognition with the Montreal Cognitive Assessment (MoCA), executive functions with the FAB, memory with the WMS-III memory quotient, and attention with a reduction in CTT execution time (Oliveira et al., 2020).

With a more global approach, Jovanovski et al. (2012) developed the Multitasking in the City Test and investigated its convergent validity with a battery of neuropsychological tests in a sample of eleven stroke and two TBI participants (Jovanovski et al., 2012). The VR system consists of eleven different buildings (from a post office to an optometrist's office) and the participant's home. Virtual elements were displayed on a computer monitor and interacted with via a joystick. Participants were required to purchase several items, obtain money from the bank, and attend a doctor's appointment within a period of 15 min. Performance in the VR-based system was evaluated according to completion time, tasks completed, task repetitions, insufficient funds, inefficiencies, and task failures. Clinical testing included the Controlled Oral Word Association Test (COWAT) (Benton and Hamsher, 1989), Semantic Fluency (Animals), WCST (Berg, 1948), BADS (Wilson et al., 1996), TMT (Reitan, 1958), WAIS-III (Wechsler, 1997b), Judgment of Line Orientation (Benton et al., 1975), ROCF (Osterrieth, 1954), CVLT (Delis et al., 1988), and Wechsler Memory Scale-III (Wechsler, 1997a). With regard to executive functioning, good to excellent correlations emerged between the TMT Part B and the VE task completion time (r = 0.64, p = 0.02), and between the WCST and the VE task completion time (r = 0.84, p < 0.01) and total errors (r = 0.60, p = 0.03). Moderate correlations were found between the TMT Part A and the VE task completion time (r = 0.59, p = 0.04), and between the Judgment of Line Orientation test and the VE total errors (r = −0.56, p = 0.05) and completion time (r = −0.59, p = 0.03).

Vourvopoulos et al. (2014) developed the Reh@City, a virtual city that simulates several ADLs within a supermarket, a post office, a pharmacy, and a bank, and aims to train visuospatial orientation, attention, and executive function (Vourvopoulos et al., 2014). The VE was displayed on a 24-inch computer monitor, and a joystick enabled navigation and interaction. Participants were engaged in multiple tasks involving visuospatial orientation (navigating to appropriate places), attention (selecting target elements among distractors), and executive function (buying groceries or withdrawing euros from an ATM). Performance in the VR-based system was assessed according to the score, distance, and time spent in navigation or in task completion. In a preliminary study, the authors investigated the convergent validity of the Reh@City with the MMSE (Folstein et al., 1975) and the Stroke Impact Scale (SIS) (Duncan et al., 2003). A one-session pilot study with ten participants (seven stroke, two TBI, and one MCI) revealed strong correlations of the score and the distance traveled in the Reh@City with the MMSE (r = 0.81, p < 0.05, and r = 0.65, p < 0.05, respectively). In addition, the mood stability and control items of the SIS also showed good correlations with the score (r = 0.75, p < 0.05) and with the time spent in navigation (r = −0.72, p < 0.05) and task completion (r = 0.72, p < 0.05). A later RCT with the Reh@City, carried out by the same authors, included 18 post-stroke participants who were randomly assigned to experimental training with the system (n = 9) or to a conventional intervention consisting of occupational therapy sessions (n = 9) (Faria et al., 2016). Participants underwent twelve 20-min sessions distributed over 4 to 6 weeks and were assessed with the Addenbrooke's Cognitive Examination (ACE) (Mioshi et al., 2006), the TMT Parts A and B, the Picture Arrangement subtest of the WAIS-III, and the SIS. Although both groups improved in almost all tests and subtests of the neuropsychological assessment battery, greater improvements were detected in the participants who trained with the VR system in the total score and the attention and fluency subtests of the ACE, as well as in the MMSE. Subsequently, the authors implemented a personalization and adaptation framework (Faria et al., 2018) in a Reh@City 2.0 version, also increasing the variety of locations for ADL simulations: a magazine kiosk, a fashion store, a park, and a home. Reh@City 2.0 was compared with a content-equivalent paper-and-pencil cognitive training tool, which followed the same personalization and adaptation framework, in an RCT with 36 stroke patients (Faria et al., 2020). The intervention comprised 12 sessions, with neuropsychological assessments pre-intervention, post-intervention, and at follow-up. Primary outcomes were general cognitive functioning (MoCA), attention (TMT Parts A and B), memory (Verbal Paired Associates from the WMS-III), executive functions (Digit-Symbol Coding, Symbol Search and Digit Span from the WAIS-III), and language (Vocabulary from the WAIS-III); the secondary outcome was the self-perceived impact of cognitive deficits on different aspects of everyday life (everyday life skills, family and life, mood, and sense of self), measured by the Patient Reported Evaluation of the Cognitive State (PRECiS) (Patchick et al., 2015). Results revealed that the Reh@City v2.0 improved general cognitive functioning, attention, visuospatial ability, and executive functions. These improvements generalized to specific assessments of verbal memory, processing speed, and self-perceived cognitive deficits. The paper-and-pencil intervention only had an impact on the MoCA orientation domain, processing speed, and verbal memory outcomes; at follow-up, however, the processing speed and verbal memory improvements were maintained and a new improvement emerged in language. Between groups, the Reh@City v2.0 was superior in general cognitive functioning (Faria et al., 2020). The authors also analyzed session-to-session performance in this intervention in order to compare the paper-and-pencil with the ecologically valid VR-based approach (Faria et al., 2019). Results showed that both groups performed at the same level and that the training methodology had no effect on overall performance. However, the Reh@City enabled more intensive training, which may translate into greater cognitive improvements.

Claessen et al. (2016) compared spatial navigation in the real world and in a virtual city, which consisted of a photorealistic virtual rendition of a real city. Both routes were about 400 m long and involved 11 decision points, where participants decided to take a left or right turn. A sample of 68 stroke participants and 44 HC navigated within the real and the virtual city, after which they completed eight subtasks addressing route knowledge (scene recognition, route continuation, route sequence and order) and the integration of geometrical aspects (distance and duration estimation, route drawing, and map recognition). Poor to moderate correlations between virtual and real navigation were found for HC and stroke participants in route continuation (r = 0.27, p = 0.27, and r = 0.37, p = 0.013, respectively), order (r = 0.35, p = 0.03, and r = 0.31, p = 0.043, respectively), and distance estimation (r = 0.31, p = 0.12, and r = 0.56, p < 0.001, respectively). An additional poor, but significant, correlation was found in the patient group for route sequence (r = 0.27, p = 0.029) (Claessen et al., 2016).

3.6. Other everyday life scenarios

Other everyday life scenarios have been simulated in VR for cognitive rehabilitation (Table 6). In this section, 10 studies have been included (three RCTs, five pilot studies, one usability study, and one prospective cross-sectional study), involving 114 stroke, 116 TBI, two brain tumor, 11 MS, 12 intracerebral hemorrhage, one encephalitis, and six degenerative disease participants, as well as 87 HC.

Targeting the assessment of executive functions, Renison et al. (2012) developed the Virtual Library Task, a virtual replica of a real library where participants are required to perform different tasks associated with the daily routine of a library (Renison et al., 2012). Participants must prioritize and complete tasks, such as cooling down the library or checking items that appear in the in-tray, while managing interruptions and the acquisition of new information. The VE is displayed on a computer monitor and is interacted with using a gamepad. The authors compared the performance of 30 TBI and 30 HC participants in the virtual and the real-life scenarios in two different 90-min sessions; participants were also assessed with the Verbal Fluency Test (Alderman et al., 2003), the WCST (Heaton, 2005), the Brixton Spatial Anticipation Test (Burgess and Shallice, 1996), and the Zoo Map and MSET from the BADS (Wilson et al., 1996). Results showed significant weak to strong correlations between performance on the virtual and real tests, evidenced in the total score (r = 0.68, p < 0.01) and in the scores of the subtests, which included task analysis (r = 0.27; p = 0.04), strategy generation and regulation (r = 0.77; p = 0.01), prospective working memory (r = 0.53; p = 0.01), response inhibition (r = 0.54; p = 0.01), and timed (r = 0.48; p = 0.01) and event-based prospective memory (r = 0.73; p = 0.01). Although both groups had a similar cognitive condition, with only one difference in the MSET (p = 0.02), TBI participants performed significantly worse than HC in the VE, which yielded significant differences in the total score, prospective working memory, dual tasking, and timed and event-based prospective memory (Renison et al., 2012).

With the same objective as the previous group, Krch et al. (2013) developed the Assessim Office, a virtual office that aims to evaluate executive functioning from performance on tasks that simulate real-world demands, such as responding to emails while ensuring that a projector remains on (selective and divided attention), making decisions on real estate offers (problem solving), printing offers (working memory), and delivering printed offers to a file box (prospective memory) (Krch et al., 2013). The VE was shown on a computer monitor with stereo desktop speakers, and navigated and interacted with using a two-button mouse. In a pilot study, the authors compared the performance of seven TBI individuals with that of seven HC (and five individuals with multiple sclerosis) and explored the relationship between the performance of the neurological participants and neuropsychological measures of executive function, which included the Letter Number Sequencing and Digit Span Backwards of the WAIS-III (Wechsler, 1997b), the Color Word Test of the Delis-Kaplan Executive Function System (Delis et al., 2000), and the Wechsler Abbreviated Scale of Intelligence (WASI) (Axelrod, 2002). Overall, HC performed better than participants with TBI, but significant differences were only detected for correct real estate decisions; an exception was printing declined offers, at which participants with TBI performed better. Excellent correlations were detected between the raw score and set loss errors of the Delis-Kaplan Executive Function System and the incorrect prints (r = −0.889, p = 0.044; and r = −0.913, p = 0.030, respectively) and projector light misses (r = 0.947, p = 0.014; and r = 0.973, p = 0.005, respectively). The color word inhibition and inhibition/switching scores of the same test had excellent correlations with the number of emails correctly replied to (r = −0.900, p = 0.037) and offers delivered (r = 0.894, p = 0.041) (Krch et al., 2013). Matheis et al. (2007) had also previously developed a VR-based office to assess learning and memory in TBI. The authors compared 20 TBI participants with 20 HC on their ability to learn and recall 16 target items presented within a VR-based office environment through a 5th Dimension Technologies 800 Series HMD. Besides initial acquisition in the VR-based learning and memory task (12 learning trials) and short-term (30 min) and long-term (24 h) recall performance, outcome measures consisted of a complete battery of neuropsychological measures and the modified Simulator Sickness Questionnaire (m-SSQ) (Kennedy et al., 1993). The following neuropsychological measures were applied: the WASI (Axelrod, 2002); the Digit Span and Digit Symbol-Coding subtests from the WAIS-III (Wechsler, 1997b); the TMT Parts A and B (Reitan, 1958); the WCST (Heaton, 1981); the Boston Naming Test (BNT) (Kaplan et al., n.d.); the Hooper Visual Organization Test (Hooper, 1983); the COWAT (Spreen and Strauss, 1998); the CVLT (Delis et al., 1988); and the Brief Visuospatial Memory Test-Revised (Benedict et al., 1996). The results indicated that VR memory testing accurately distinguished the TBI group from controls. Additionally, non-memory-impaired TBI participants acquired targets at the same rate as HC participants. Finally, the authors found a significant relationship between the VR Office and a standard neuropsychological measure of memory, supporting the construct validity of the task (Matheis et al., 2007).

Also for executive function assessment, Gilboa et al. (2017) developed the Jansari assessment of Executive Functions for Children (JEF-C) and tested its feasibility and validity in children and adolescents with ABI. In the JEF-C, the participant has to plan, set up, and run their birthday party through the completion of tasks. The party takes place in a virtual home with three rooms: the kitchen, the living room, and the DVD/games room. The participant can move around freely with the computer mouse. Realistic tasks that could happen at a birthday party were created in order to ecologically tackle eight constructs: planning; prioritization; selective, adaptive and creative thinking; and action-, time- and event-based prospective memory. All participants (29 ABI patients aged 10 to 18 years and 30 age- and gender-matched controls) performed the JEF-C, the WASI, and the BADS for Children (BADS-C), while parents completed the Behavior Rating Inventory of Executive Function (BRIEF) questionnaire. The JEF-C task proved feasible for children and adolescents with ABI. The internal consistency was medium (Cronbach's alpha = 0.62), with significant intercorrelations between individual JEF-C constructs. Patients performed significantly worse than controls on most of the JEF-C subscales and on the total score, with 41.4% of participants with ABI classified as having severe executive dysfunction. No significant correlations were found between the JEF-C total score, the BRIEF indices, and the BADS-C. Significant correlations were found between the JEF-C and demographic characteristics of the sample and intellectual ability, but not with severity or medical variables. The JEF-C appears to be a sensitive and ecologically valid assessment tool, especially for relatively high-functioning individuals (Gilboa et al., 2019).

With a wider goal of cognitive assessment and rehabilitation, Fong et al. (2010) developed a virtual replica of an ATM to train three common tasks: cash withdrawal, money transfer, and electronic payment (Fong et al., 2010). In a first experiment, the authors compared the sensitivity and specificity of the cash withdrawal and money transfer tasks. A sample of nine stroke and five TBI participants performed the tasks on the real and the virtual ATMs. The sensitivity of the virtual replica was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. In a second experiment, nine participants with stroke and one participant with a brain tumor were assigned in matched pairs to either VR-based training with the virtual ATM or a computer-assisted instruction teaching program with multimedia tutorials, feedback, and verbal reinforcement, for six 1-h sessions over a three-week period. Participants' general cognitive condition was also assessed with the Neurobehavioral Cognitive Status Examination (Chan et al., 2002). Results showed better performance by the participants who trained with the virtual ATM in reaction time (p = 0.021) and score (p = 0.043) in the virtual cash withdrawal task after treatment, but not in the money transfer task (Fong et al., 2010).
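As a reminder of how such diagnostic accuracy figures are derived, the sketch below computes sensitivity and specificity from confusion-matrix counts. The counts are hypothetical (the actual counts are not reported here) and merely chosen so that they reproduce percentages of the same order as those reported for the money-transfer task:

```python
# Illustrative only: hypothetical confusion-matrix counts, not data from Fong et al. (2010).
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # impaired on the real task and also flagged by the virtual task
    specificity = tn / (tn + fp)   # unimpaired on the real task and also passed by the virtual task
    return sensitivity, specificity

# Hypothetical counts for a money-transfer-like task
sens, spec = sensitivity_specificity(tp=5, fn=1, tn=6, fp=2)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 83.3%, 75.0%
```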

Gerber et al. (2014) simulated several interactive virtual tasks that were displayed on a computer monitor and interacted with through the Phantom® Omni™ haptic device (3D Systems, CA, USA) (Gerber et al., 2014). Tasks included removing tools from a workbench, composing 3-letter words, preparing a sandwich, and hammering nails. Nineteen individuals with TBI were enrolled in a usability study, interacted with each task three times for a maximum of 5 min (2 min in the word-composing task), and were assessed with the Boredom Proneness Scale (Farmer and Sundberg, 1986), the Purdue Pegboard Test (Mathiowetz et al., 1985), the Neurobehavioral Symptom Inventory (Cicerone and Kalmar, 1995), and the Wolf Motor Function Test (Wolf et al., 2001). Moderate correlations were detected between the clinical scales and the scores of the third iteration of the tasks. Specifically, the time to complete the workbench clearance and the hammering task correlated with the Purdue Pegboard Test (r = −0.652, p = 0.016; and r = −0.598, p = 0.014, respectively), and the number of words completed correlated with the Neurobehavioral Symptom Inventory (r = −0.494, p = 0.052). In addition, according to the Boredom Proneness Scale, all participants were highly engaged in the interaction (Gerber et al., 2014).

Focusing on upper-limb training, Fluet et al. (2013) involved 30 individuals with stroke in a study comparing the effectiveness of a virtually simulated program of repetitive task practice with a comparable program of conventionally presented activities (Fluet et al., 2013). The VE simulated real-life activities such as reaching for items, placing cups, hammering, and playing the piano, along with other fictional narratives. Visual feedback was provided using a monitor, and interaction was facilitated through a CyberGlove (Immersion, USA), an instrumented glove for measuring finger angles, which was equipped with a CyberGrasp (Immersion, USA) providing haptic feedback, and through a Haptic MASTER (Moog NCS, The Netherlands), a force-controlled robot with three degrees of freedom. Conventionally presented activities included reaching for items, writing, keyboarding, cooking, dressing, etc. Participants performed one of the two programs for eight 3-h sessions over a two-week period and were assessed before and after the intervention, and 3–6 months later, with the Upper Extremity subscale of the FMA (Fugl-Meyer et al., 1975), the Wolf Motor Function Test, and the Jebsen Test of Hand Function. Both groups evidenced statistically significant improvements over time in the three scales, although the values were not specified in the text. However, no statistically significant differences between groups were detected at any of the three measurement times.

Also for upper-limb rehabilitation, Qiu et al. (2020) developed the Home-based Virtual Rehabilitation System (HoVRS) and performed a feasibility study with 15 chronic stroke participants. HoVRS was placed in participants' homes, and participants were asked to use the system for at least 15 min every weekday for 3 months (12 weeks) with limited technical support and remote clinical monitoring. The intervention included a subset of five games (Maze, Wrist Flying, Finger Flying, Car, Fruit Catch) out of the HoVRS 12-game library, with at least one from each type of movement category (Elbow-Shoulder, Wrist, Hand, Whole Arm). Each weekday, subjects were encouraged to play at least three rehabilitation activities for a minimum of 15 min. Participants were assessed pre- and post-intervention with the Upper-Extremity FMA. In addition, six hand and arm kinematic measures were collected using testing games and subsequently analyzed. Participants were able to complete the study without any adverse events and spent on average 13.5 h using the system. At post-intervention, participants demonstrated a mean increase of 5.2 points on the FMA and improved in the six measurements of hand kinematics. Additionally, a combination of these kinematic measures was able to predict a substantial portion of the variability in the subjects' UEFMA scores (Qiu et al., 2020).
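The kind of analysis behind that last statement, predicting a clinical score from a set of kinematic measures, can be sketched as an ordinary least-squares regression. The data below are synthetic and do not reproduce the study's measures or results:

```python
# Illustrative sketch with synthetic data: regressing a clinical score on several
# kinematic measures and reporting the variance explained.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_kinematics = 15, 6
X = rng.normal(size=(n_subjects, n_kinematics))           # six kinematic measures per subject
true_weights = np.array([3.0, 1.5, 0.0, 2.0, 0.5, 1.0])   # arbitrary synthetic weights
fma = 40 + X @ true_weights + rng.normal(scale=2.0, size=n_subjects)  # synthetic clinical scores

# Ordinary least squares fit with an intercept term
design = np.column_stack([np.ones(n_subjects), X])
coef, *_ = np.linalg.lstsq(design, fma, rcond=None)
predicted = design @ coef
r2 = 1 - np.sum((fma - predicted) ** 2) / np.sum((fma - fma.mean()) ** 2)
print(f"variance explained (R^2): {r2:.2f}")
```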

Cho and Lee (2019) investigated the impact of immersive VR training combined with computerized cognitive training using RehaCom (Schuhfried, 1996) on cognitive function and ADLs in acute stroke patients. The patients were randomly divided into experimental (n = 21) and control (n = 21) groups. The experimental group performed VR training with an HMD in addition to computerized cognitive therapy (RehaCom), whereas the control group performed computerized cognitive therapy (RehaCom) alone. The VR training consisted of Fishing and Picture Matching tasks: in the first, the user had to catch fish using the upper extremities; in the second, the user had to flip cards and find a match, with an initial screen of eight cards which the user could turn or look back at to see all the cards. All participants trained for 30 min a day, five times a week, and the intervention lasted 4 weeks. To evaluate the improvement in each group, a pre-post-test evaluation was conducted using the Loewenstein Occupational Therapy Cognitive Assessment (LOTCA) (Katz et al., 1989), the Computerized Neurocognitive Function Test (CNT) (Kwon et al., 2002), and the FIM (Grimby et al., 1996). For changes before and after the intervention in the CNT, the experimental group was significantly superior in the Auditory Continuous Performance Test (ACPT) from the CNT (experimental = 10.24, control = 3.29, p = 0.01), in the VRT (experimental = 1.76, control = 0.76, p < 0.001), and in VRT-recall (experimental = 1.81, control = 0.81, p < 0.001). For FIM total motor function, the experimental group was also superior (experimental = 19.19, control = 9.43, p < 0.001). The experimental group showed significant improvements in all LOTCA, CNT, and FIM items from pre- to post-intervention (Cho and Lee, 2019).

Finally, taking a leisure-oriented approach, Lorentz et al. (2021) conceptualized the VR Traveller, a training program for attentional functions set in a more engaging context that also resembles real-life activities. The program offers several modules to address specific attentional dysfunctions within the context of a virtual journey around the world. Each scenario is set in a different location and addresses alertness, selective attention, visual scanning, and working memory. The VR Traveller training program was completed by 35 patients with ABI in a 20–30 min session during inpatient neurorehabilitation. Feasibility and acceptability were assessed with the User Experience Questionnaire (UEQ) (Laugwitz et al., 2008) and a self-constructed feasibility questionnaire, and tolerability was assessed with the Virtual Reality Sickness Questionnaire (VRSQ) (Kim et al., 2018). Overall, patients' ratings of the VR training in terms of acceptability and feasibility were positive, suggesting that VR programs represent an accepted, feasible, and well-received alternative to traditional cognitive rehabilitation approaches (Lorentz et al., 2021).

4. Discussion

VR technologies have evolved from rudimentary systems in the 1960s to sophisticated immersive environments in recent decades, and VR has been increasingly used as a tool for neuropsychological assessment and rehabilitation. The purpose of this review was to provide an overview of the use of ecologically valid virtual environments and related technologies to assess and rehabilitate people with ABI. In this section we discuss our findings according to the objectives stated in the Introduction.

4.1. What are the most common virtual environments used in acquired brain injury assessment and rehabilitation?

With this review we have identified the main daily life environments and tasks that are simulated through VR. Overall, we have considered 70 studies, of which 12 were simulations of kitchens, 11 of supermarkets, 10 of shopping malls, 16 of streets, 11 of cities, and 10 of other everyday life scenarios.

In light of the existing studies, we have concluded that virtual kitchens may have the potential to discriminate between healthy and pathological performance and to simulate meal and hot drink preparation tasks with moderate correlations to real-life performance (Zhang et al., 2003; Edmans et al., 2006; Besnard et al., 2016; Triandafilou et al., 2018; Thielbar et al., 2020) and to neuropsychological (Besnard et al., 2016) and motor tests (Adams et al., 2015; Huang et al., 2017).

Considering the supermarket context, different studies have shown moderate correlations between the performance of HC and post-stroke participants in virtual supermarkets and neuropsychological tests (Josman et al., 2006, 2014; Raspelli et al., 2012; Yip and Man, 2013; Sorita et al., 2014; Cogné et al., 2018). Moderate correlations were also evidenced between performance in real and virtual shopping tasks (Yip and Man, 2013). Performance in virtual supermarkets has also been shown to successfully differentiate between healthy and post-stroke participants (Kang et al., 2008; Raspelli et al., 2012; Cogné et al., 2018; Ogourtsova et al., 2018). Moreover, virtual supermarkets have been effectively used to train upper limb motor function with similar effectiveness to conventional therapy, although the effectiveness of the intervention is expected to rely on the motor task rather than the environment (Yin et al., 2014; Demers and Levin, 2020). Finally, a VR kitchen was presented on a computer monitor and an HMD, and both stroke patients and HC reported an enhanced feeling of engagement, transportation, flow, and presence in the HMD condition (Spreij et al., 2020b).

Also from a shopping-task perspective, virtual malls are widely used to assess and rehabilitate ABI patients. The existing literature shows a general sensitivity of the virtual tasks recreated in virtual malls to differentiate between healthy individuals and individuals with stroke (Rand et al., 2007, 2009a; Hadad et al., 2012; Okahashi et al., 2013; Nir-Hadad et al., 2015) and TBI (Canty et al., 2014), and between young and older adults (Rand et al., 2009a; Okahashi et al., 2013). Virtual tasks also showed moderate to high correlations with clinical tests in stroke (Nir-Hadad et al., 2015) and TBI (Erez et al., 2013; Okahashi et al., 2013; Canty et al., 2014), and, more importantly, moderate correlations with performance in mockups (Hadad et al., 2012; Nir-Hadad et al., 2015) and in real-world scenarios (Rand et al., 2009b; Hadad et al., 2012; Nir-Hadad et al., 2015). Interventions with VEs recreating virtual malls showed preliminary effectiveness at improving upper limb function (Rand et al., 2009a), as well as multitasking (Rand et al., 2009c) and other executive functions (Jacoby et al., 2013).

Street crossing and driving are demanding tasks that are commonly impaired after brain injury. Studies suggest worse performance of individuals with ABI, in comparison to healthy subjects, both as virtual drivers (Liu et al., 1999) and as pedestrians aiming to cross the street (Naveh et al., 2000; Weiss et al., 2003; Navarro et al., 2013), with increased difficulties in the presence of USN (Navarro et al., 2013), and when remembering a route (Titov and Knight, 2005). Street crossing has shown concurrent validity with standardized neuropsychological tests (Navarro et al., 2013), which was not replicated with virtual driving (Wald et al., 2000). Moreover, virtual street crossing, driving, and walking can be used to improve neglect, the ability to drive, and route retention, with comparable efficacy to visual scanning tasks (Katz et al., 2005), driving-related tasks (Akinwuntan et al., 2005), or walking in the real world (Sorita et al., 2013), respectively. However, training in more ecological conditions could provide increased visual and anticipation abilities (Devos et al., 2009), which could have a positive transfer to driving in the real world (Akinwuntan et al., 2005), and to episodic memory (Sorita et al., 2013), working memory, and selective attention (Ettenhofer et al., 2019). In terms of assessment, VR street-crossing systems are potentially useful to differentiate USN and non-USN individuals (Spreij et al., 2020a; Wagner et al., 2021).

Virtual cities allow for the simulation of a diversity of everyday life tasks. The studies presented here showed convergent validity of variable strength between measures of performance in simulated virtual cities and clinical neuropsychological tests, ranging from moderate for attention (Jovanovski et al., 2012), to good to excellent for executive functioning (Jovanovski et al., 2012), and excellent for general cognitive condition (Vourvopoulos et al., 2014; Oliveira et al., 2020) and even mood (Vourvopoulos et al., 2014). Poor to moderate correlations have also been reported between navigation in a real and a virtual city (Claessen et al., 2016). Training in virtual cities has also been shown to improve processing speed, flexibility, and calculation after TBI (Gamito et al., 2011a), and attention and memory after stroke (Gamito et al., 2011b, 2014). Remarkably, the effectiveness of VR-based training in virtual cities after stroke has been reported not only in comparison to no intervention, with the former providing greater improvements in attention and memory (Gamito et al., 2015), but also in comparison to matched conventional occupational therapy, with the former providing greater improvements in general cognitive condition, attention, and fluency (Faria et al., 2016). Even when compared to a time-matched, content-equivalent paper-and-pencil training, a more ecologically valid training with a virtual city revealed higher effectiveness, with improvements in different cognitive domains and in self-perceived cognitive deficits in everyday life (Faria et al., 2020). It should also be highlighted that similar improvements have been obtained with immersive and non-immersive displays, showing little effect of the enabling technology (Faria et al., 2019, 2020).

Ultimately, the simulation of tasks in other everyday life environments, such as virtual libraries, offices, ATMs, workbenches, and virtual travelling, has been reported to show some sensitivity to impairments after TBI in memory (Renison et al., 2012) and executive functions (Renison et al., 2012; Krch et al., 2013), and also good sensitivity and specificity to predict performance in simple tasks in the real world (Fong et al., 2010). Convergent validity of performance in the virtual tasks has been reported to be excellent with clinical measures of executive function (Krch et al., 2013; Gilboa et al., 2019) and general cognitive functioning (Cho and Lee, 2019), moderate with neurobehavioral symptoms (Gerber et al., 2014), and good with measures of hand dexterity (Gerber et al., 2014). In addition, performance in the virtual world has shown moderate to good convergent validity with some measures of memory and executive function during performance in the real world (Renison et al., 2012). Training with VEs simulating other environments has been shown to be specific, improving performance in the virtual task in comparison to a computer-assisted instruction teaching program (Fong et al., 2010), and to provide comparable improvements in upper-limb motor function in comparison to conventionally presented activities (Fluet et al., 2013; Qiu et al., 2020). Finally, it is important to highlight that leisure simulations, such as travelling, are also promising for the assessment and rehabilitation of ABI (Lorentz et al., 2021).

4.2. Which technologies are used for presentation and interaction in these environments?

The VEs included in this review were mostly presented on computer screens (26 studies), HMDs (16 studies), laptops (six studies), and wall projections (two studies), and patients interacted with them primarily via mouse (19 studies), keyboard (15 studies), joystick (nine studies), GestureTek (six studies), Kinect (six studies), and touchscreen (five studies).

4.3. How are these virtual environments being clinically validated regarding their impact on ABI assessment and rehabilitation?

According to this review, a great number of ecologically valid simulations of daily-life tasks mostly target cognitive domains, such as general executive functions (25 studies), attention (18 studies), memory (10 studies), and general cognition (eight studies). Only 11 studies focused on the assessment and rehabilitation of motor aspects.

Numerous studies have been carried out to clinically validate these VR-based assessment and rehabilitation environments, from case studies to RCTs. Evidence is still modest, and further research with more extensive and homogeneous samples is needed. Of the 70 studies, 45 had an experimental design and 25 a non-experimental design.

There is also a great need for more uniformity in the neuropsychological and motor tests used in these studies in order to strengthen conclusions and allow comparison between studies. In the universe of 70 studies, 136 different outcome measures were used across the following main domains: cognitive, functional, motor, emotion, cybersickness, immersion, and engagement. The 10 most used instruments and questionnaires were: the WAIS-III for general cognition (13 studies), the MMSE for cognitive screening (13 studies), the BADS for executive functions (10 studies), the TMT Parts A and B for attention, processing speed, and working memory (eight studies), the WMS-III for general memory (eight studies), the WCST also for executive functions (seven studies), the WAIS Digit Span for memory and working memory (seven studies), the FMA for motor functions (seven studies), the FIM for general functionality (six studies), and the BDI-II for depressive symptomatology (five studies).

Although higher levels of engagement are allegedly one of the advantages of VR tools, only one study assessed it (O’Brien, 2007). Additionally, only a marginal number of studies (nine) assessed presence and immersion; the most used outcome measure in this domain (four studies) was the SFQ (Witmer and Singer, 1998).

4.4. Implications for clinical practice

One of the unresolved issues that must be addressed is the suitability of particular VR platforms in relation to the therapeutic goals one wishes to achieve. In total, 12 different self-report functional assessment scales were used as outcome measures, suggesting a likely direct transfer from performance in ecologically valid VR tasks to everyday life routines. Health professionals should consider this transfer effect when choosing a test to assess driving capacity or when selecting training tasks that address rehabilitation (Krasny-Pacini et al., 2016).

One of the critical issues in VR-based neuropsychological assessment is the targeted end-user (i.e., the person who uses it as an assessment tool). As defined by the American Psychological Association (APA), researchers and clinicians “do not promote the use of psychological assessment techniques by unqualified persons, except when such use is conducted for training purposes with appropriate supervision” (Campbell et al., 2010; Ethical Standard 9.07, Assessment by Unqualified Persons). VR instruments can be administered by professionals who do not have a background in neuropsychology, but the results should be integrated and interpreted by a competent professional, such as a neuropsychologist. As such, VR-based assessment instruments should be designed to be administered by a clinician or researcher who has competency in neuropsychological assessment and who guarantees privacy and data security, test scoring and interpretation, and record keeping (Kourtesis et al., 2020). Even though most VR-based tools do not require the intervention of a health professional, their supervision and guidance are essential not only in the assessment but also in the recovery process (Bruno et al., 2022).

The time and frequency of interaction with immersive and non-immersive VR technologies are highly variable across the studies, and clinical guidelines would be valuable. For instance, the findings of Kourtesis et al. (2019) support the viability of VR sessions lasting up to 70 min, provided that the participants are familiarized with VR technology and the quality of the VR software meets the parsimonious cut-offs of the Virtual Reality Neuroscience Questionnaire (VRNQ). This questionnaire was developed and validated by Kourtesis et al. (2019) to assess the quality of VR software in terms of user experience, game mechanics, in-game assistance, and cybersickness.

Finally, although there is no strong evidence that the use of VR is more beneficial than conventional therapy in ABI assessment and rehabilitation (e.g., Laver et al., 2017), this technological approach has been shown to be beneficial as a complement to usual care in the reviewed studies, for several reasons: it is more engaging (O’Brien, 2007); it enables more intensive training (Faria et al., 2019); it provides immediate feedback (Fong et al., 2010); and its tasks have greater verisimilitude and validity (Nir-Hadad et al., 2015).

4.5. Implications for research

The lack of robust evidence on the clinical validity and impact of VR-based ecologically valid assessment and rehabilitation tools is mainly due to the need for studies of higher methodological quality (RCTs) with larger sample sizes. Moreover, in the case of rehabilitation, having an active control group is essential to confirm clinical efficacy. In this review, only Faria et al. (2020) compared a VR-based intervention with its equivalent conventional approach (paper-and-pencil), although with a small sample size.

In terms of future research directions, it is important to mention the work of Kourtesis and colleagues, who developed the Virtual Reality Everyday Assessment Lab (VR-EAL) (Kourtesis et al., 2019), the first immersive and ecologically valid VR neuropsychological battery, developed to meet the criteria of the National Academy of Neuropsychology (NAN) and the American Academy of Clinical Neuropsychology (AACN) for Computerized Neuropsychological Assessment Devices (CNADs). The AACN and NAN recognize potential advantages of CNADs such as: testing large numbers of individuals quickly (e.g., parallel administration); immediate test availability; enhanced accuracy and precision (e.g., reaction time measurements); shorter administration time and reduced costs (e.g., for test administration and scoring); adaptability to different languages; automatic data export (e.g., for research purposes); increased accessibility (e.g., remote administration); and the integration of algorithms for making decisions on issues such as the identification of an impairment or a statistically reliable change (Bauer et al., 2012). Since its development, the VR-EAL has already been shown to achieve several of these benefits: it is immediately available after installation on a personal computer, automatically produces accurate performance scores, has no costs for administration and scoring, and requires a shorter administration time than its equivalent paper-and-pencil counterparts. The VR-EAL is only missing a predictive algorithm for identifying cognitive impairment, since it has not yet been validated with any clinical population (Kourtesis and MacPherson, 2021). The work of these authors is an example of good practices and is, as such, an important reference for future studies in this field.

Another relevant topic for future research is the impact of the level of immersiveness. The VEs explored in this review were mainly presented, in ascending degree of immersiveness, on laptops (six studies), computer screens (26 studies), wall projections (two studies), and HMDs (16 studies). Understanding how the level of immersion influences the effectiveness of therapy or assessment will provide valuable insights and more nuanced recommendations for practice (Vasser and Aru, 2020). In this review, only three studies measured presence and immersion, namely with the Presence Questionnaire (Witmer and Singer, 1998), the Igroup Presence Questionnaire (IPQ, n.d.), and the Immersive Tendencies Questionnaire (Witmer and Singer, 1998).

Cybersickness is a significant side effect associated with the use of VR and arises from a conflict between the visual, vestibular, and proprioceptive systems (Kourtesis et al., 2019). According to the literature, researchers and clinicians rarely assess cybersickness quantitatively, despite its impact on cognitive performance (Nesbitt et al., 2017; Arafat et al., 2018; Mittelstaedt et al., 2019). Indeed, of the 70 studies, only three assessed cybersickness with dedicated measures, namely the Virtual Reality Sickness Questionnaire (Kim et al., 2018) and the Simulator Sickness Questionnaire (Kennedy et al., 1993). Understanding how to avoid or mitigate these symptoms is crucial for the safe and effective use of VR in neuropsychological assessment and rehabilitation.

Finally, the evidence would be enriched if future studies in this field incorporated neuroimaging to explore neuroplasticity changes that positively affect participants’ cognition, mood, and functionality. Given the specificity of some VR-based scenarios (such as kitchens and streets) and the multi-domain approach of others (such as malls and cities), which approach is better for promoting neuroplasticity mechanisms?

5. Conclusion

In the last decade, VR technologies have offered numerous possibilities for the design and development of sophisticated simulations of ADLs that are clinically interesting, especially in the assessment and rehabilitation of cognitive and functional impairments after ABI. Everyday life tasks that would be difficult, if not impossible, to assess or train using traditional neuropsychological methods are now enabled by VR technologies.

In this review, we found that kitchens, supermarkets, shopping malls, streets, and cities are the most used scenarios for ecologically valid simulations. These are mostly presented on computer screens, HMDs, and laptops, and patients interact with them mostly via mouse, keyboard, joystick, GestureTek, and Kinect.

From case studies to RCTs, numerous studies have been carried out to clinically validate these VR-based assessment and rehabilitation environments. However, the evidence is still modest, and further research with larger samples is needed. Also, greater uniformity of the traditional neuropsychological tests used would strengthen conclusions across different studies. With empirical studies showing greater effectiveness, these ecologically valid VR-based technologies could be of significant value for better measuring and treating ABI impairments.

5.1. Limitations

Based on the studies reviewed, ecologically valid VR-based simulations of ADLs offer promising advantages as ABI assessment and rehabilitation tools. However, it is important to mention some limitations that can affect the quality and generalizability of the findings:

1. Determining which studies to include in a literature review involves subjective decisions. Although inclusion and exclusion criteria were defined, researchers might apply them differently. In our particular case, deciding whether a VR ADL simulation was ecologically valid involved some subjectivity. As such, discussion meetings were held among all authors to reach a consensual decision.

2. Concerning the comparison of results across studies, the variability in the number and frequency of sessions and the panoply of outcome measures that were used make it difficult to analyze and generalize the results.

3. The studies reviewed aimed to include only individuals with ABI; however, some studies included different diagnoses, such as MCI (Vourvopoulos et al., 2014) and MS (Krch et al., 2013), which are degenerative conditions and might have affected the results.

5.2. Future work

As stated in the “Implications for Research” section, in order to perform a meta-analysis about the efficacy of these tools, it would be important to have guidelines about the number and frequency of sessions (for the intervention studies) and outcome measures to be used in future studies.

Additionally, the field of VR tools for assessment and rehabilitation is continually evolving, with new studies published regularly. The current review only includes articles published until the end of 2021. The authors faced the challenge of analyzing the studies and writing the manuscript while keeping the literature review up to date. It would be important to follow up on this topic in a future review.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

RL proposed the review topic. RL, MSC, and SBB drafted the protocol. ALF and JL performed literature search and conducted the data extraction and analysis. ALF and JL wrote the manuscript and MSC, SBB, and RL reviewed it. All authors approved the manuscript submission.

Funding

This work is supported by Fundação para a Ciência e Tecnologia through NOVA LINCS (UIDB/04516/2020); MACbioIDi2: Promoting the cohesion of Macaronesian regions through a common ICT platform for biomedical R & D & i (INTERREG program MAC2/1.1b/352); Ministerio de Ciencia y Educación of Spain (RTC2019-006933-7); and Conselleria d’Innovació, Universitats, Ciència i Societat Digital of the Generalitat Valenciana (CIDEXG/2022/15).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Adams, R. J., Lichter, M. D., Krepkovich, E. T., Ellington, A., White, M., and Diamond, P. T. (2015). Assessing upper extremity motor function in practice of virtual activities of daily living. IEEE Trans. Neural Syst. Rehabil. Eng. 23, 287–296. doi: 10.1109/TNSRE.2014.2360149

Agrewal, S., Simon, A. M. D., Bech, S., Bærentsen, K. B., and Forchammer, S. (2020). Defining immersion: literature review and implications for research on audiovisual experiences. J. Audio Eng. Soc. 68, 404–417. doi: 10.17743/jaes.2020.0039

Akinwuntan, A. E., De Weerdt, W., Feys, H., Pauwels, J., Baten, G., Arno, P., et al. (2005). Effect of simulator training on driving after stroke: a randomized controlled trial. Neurology 65, 843–850. doi: 10.1212/01.wnl.0000171749.71919.fa

Alderman, N., Burgess, P. W., Knight, C., and Henman, C. (2003). Ecological validity of a simplified version of the multiple errands shopping test. J. Int. Neuropsychol. Soc. 9, 31–44. doi: 10.1017/S1355617703910046

Aminov, A., Rogers, J. M., Middleton, S., Caeyenberghs, K., and Wilson, P. H. (2018). What do randomized controlled trials say about virtual rehabilitation in stroke? A systematic literature review and meta-analysis of upper-limb and cognitive outcomes. J. Neuro Engin. Rehabil. 15:29. doi: 10.1186/s12984-018-0370-2

Arafat, I. M., Ferdous, S. M. S., and Quarles, J. (2018). Cybersickness-provoking virtual reality alters brain signals of persons with multiple sclerosis. In 2018 IEEE conference on virtual reality and 3D user interfaces (VR) (IEEE), 1–120.

Axelrod, B. N. (2002). Validity of the Wechsler abbreviated scale of intelligence and other very short forms of estimating intellectual functioning. Assessment 9, 17–23. doi: 10.1177/1073191102009001003

Azouvi, P. (2017). The ecological assessment of unilateral neglect. Ann. Phys. Rehabil. Med. 60, 186–190. doi: 10.1016/j.rehab.2015.12.005

Banaji, M. R., and Crowder, R. G. (1989). The bankruptcy of everyday memory. Am. Psychol. 44, 1185–1193. doi: 10.1037/0003-066X.44.9.1185

Bauer, R. M., Iverson, G. L., Cernich, A. N., Binder, L. M., Ruff, R. M., and Naugle, R. I. (2012). Computerized neuropsychological assessment devices: joint position paper of the American Academy of clinical neuropsychology and the National Academy of neuropsychology. Arch. Clin. Neuropsychol. 27, 362–373. doi: 10.1093/arclin/acs027

Baum, C. M., Morrison, T., Hahn, M., and Edwards, D. F. (2009). Executive function performance test: Test protocol booklet Program in Occupational Therapy Washington University School of Medicine.

Baylor University Medical Center (n.d.). Adult visual perceptual assessment. (unpublished statistical data).

Beck, L. H., Bransome, E. D., Mirsky, A. F., Rosvold, H. E., and Sarason, I. (1956). A continuous performance test of brain damage. J. Consult. Psychol. 20, 343–350. doi: 10.1037/h0043220

Bell, I. H., Nicholas, J., Alvarez-Jimenez, M., Thompson, A., and Valmaggia, L. (2022). Virtual reality as a clinical tool in mental health research and practice. Dialogues Clin. Neurosci. 22, 169–177. doi: 10.31887/DCNS.2020.22.2/lvalmaggia

Benedict, R. H., Schretlen, D., Groninger, L., Dobraski, M., and Shpritz, B. (1996). Revision of the brief visuospatial memory test: studies of normal performance, reliability, and validity. Psychol. Assess. 8:145.

Benton, A. L. (1968). Differential behavioral effects in frontal lobe disease. Neuropsychologia 6, 53–60. doi: 10.1016/0028-3932(68)90038-9

Benton, A. I., and Hamsher, K. (1989). Multilingual aphasia examination AJA Associates.

Benton, A., Hannay, H. J., and Varney, N. R. (1975). Visual perception of line direction in patients with unilateral brain disease. Neurology 25, 907–910.

Berg, E. A. (1948). A simple objective technique for measuring flexibility in thinking. J. Gen. Psychol. 39, 15–22. doi: 10.1080/00221309.1948.9918159

Bernhardt, J., Borschmann, K. N., Kwakkel, G., Burridge, J. H., Eng, J. J., Walker, M. F., et al. (2019). Setting the scene for the second stroke recovery and rehabilitation roundtable. Int. J. Stroke 14, 450–456. doi: 10.1177/1747493019851287

Besnard, J., Richard, P., Banville, F., Nolin, P., Aubin, G., Le Gall, D., et al. (2016). Virtual reality and neuropsychological assessment: the reliability of a virtual kitchen to assess daily-life activities in victims of traumatic brain injury. Appl. Neuropsychol. Adult 23, 223–235. doi: 10.1080/23279095.2015.1048514

Bohil, C. J., Alicea, B., and Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12, 752–762. doi: 10.1038/nrn3122

Borg, G. (1990). Psychophysical scaling with applications in physical work and the perception of exertion. Scand. J. Work Environ. Health 16, 55–58.

Broeks, J. G., Lankhorst, G. J., Rumping, K., and Prevo, A. J. (1999). The long-term outcome of arm function after stroke: results of a follow-up study. Disabil. Rehabil. 21, 357–364.

Bruno, R. R., Wolff, G., Wernly, B., Masyuk, M., Piayda, K., Leaver, S., et al. (2022). Virtual and augmented reality in critical care medicine: the patient’s, clinician’s, and researcher’s perspective. Crit. Care 26:326. doi: 10.1186/s13054-022-04202-x

Burgess, P. W., and Shallice, T. (1996). The Hayling and Brixton tests: Test manual Themes Valley Test Company Limited.

Canty, A. L., Fleming, J., Patterson, F., Green, H. J., Man, D., and Shum, D. H. K. (2014). Evaluation of a virtual reality prospective memory task for use with individuals with severe traumatic brain injury. Neuropsychol. Rehabil. 24, 238–265. doi: 10.1080/09602011.2014.881746

Cao, X., Douguet, A.-S., Fuchs, P., and Klinger, E. (2010). Designing an ecological virtual task in the context of executive functions: Preliminary study. Available at: https://hal-mines-paristech.archives-ouvertes.fr/hal-00785344 (Accessed December 13, 2016).

Campbell, L., Vasquez, M., Behnke, S., and Kinscherff, R. (2010). APA Ethics Code commentary and case illustrations American Psychological Association.

Chan, A. S., and Kwok, I. C. (2009). Hong Kong list learning test, manual & preliminary norm. Hong Kong: Easy Press Printing.

Chan, C. C. H., Lee, T. M. C., Fong, K. N. K., Lee, C., and Wong, V. (2002). Cognitive profile for Chinese patients with stroke. Brain Inj. 16, 873–884. doi: 10.1080/02699050210131975

Cho, D. R., and Lee, S. H. (2019). Effects of virtual reality immersive training with computerized cognitive training on cognitive function and activities of daily living performance in patients with acute stage stroke: a preliminary randomized controlled trial. Med. 98:e14752. doi: 10.1097/MD.0000000000014752

Cicerone, K. D., and Kalmar, K. (1995). Persistent postconcussion syndrome: the structure of subjective complaints after mild traumatic brain injury. J. Head Trauma Rehabil. 10, 1–17. doi: 10.1097/00001199-199510030-00002

Claessen, M. H. G., Visser-Meily, J. M. A., de Rooij, N. K., Postma, A., and van der Ham, I. J. M. (2016). A direct comparison of real-world and virtual navigation performance in chronic stroke patients. J. Int. Neuropsychol. Soc. 22, 467–477. doi: 10.1017/S1355617715001228

Cogné, M., Violleau, M.-H., Klinger, E., and Joseph, P.-A. (2018). Influence of non-contextual auditory stimuli on navigation in a virtual reality context involving executive functions among patients after stroke. Ann. Phys. Rehabil. Med. 61, 372–379. doi: 10.1016/j.rehab.2018.01.002

Conners, C. K., Epstein, J. N., Angold, A., and Klaric, J. (2003). Continuous performance test performance in a normative epidemiological sample. J. Abnorm. Child Psychol. 31, 555–562. doi: 10.1023/A:1025457300409

Corti, C., Oprandi, M. C., Chevignard, M., Jansari, A., Oldrati, V., Ferrari, E., et al. (2022). Virtual-reality performance-based assessment of cognitive functions in adult patients with acquired brain injury: a scoping review. Neuropsychol. Rev. 32, 352–399. doi: 10.1007/s11065-021-09498-0

D’Elia, L. F., Satz, P., Uchiyama, C. L., and White, T. (1994). Color trails test. Odessa, FL: PAR.

Delis, D. C., Freeland, J., Kramer, J. H., and Kaplan, E. (1988). Integrating clinical assessment with cognitive neuroscience: construct validation of the California verbal learning test. J. Consult. Clin. Psychol. 56, 123–130. doi: 10.1037/0022-006X.56.1.123

Delis, D. C., Kaplan, E., and Kramer, J. H. (2000). Delis-Kaplan executive function system The Psychological Corporation.

Delis, D. C., Kaplan, E., and Kramer, J. H. (2001). Delis-Kaplan Executive Function System (D–KEFS) [Database record]. APA PsycTests. doi: 10.1037/t15082-000

Demers, M., and Levin, M. F. (2020). Kinematic validity of reaching in a 2D virtual environment for arm rehabilitation after stroke. IEEE Trans. Neural Syst. Rehabil. Eng. 28, 679–686. doi: 10.1109/TNSRE.2020.2971862

Devos, H., Akinwuntan, A. E., Nieuwboer, A., Tant, M., Truijen, S., De Wit, L., et al. (2009). Comparison of the effect of two driving retraining programs on on-road performance after stroke. Neur. Neu. Rep. 23, 699–705. doi: 10.1177/1545968309334208

Diehl, M., Marsiske, M., Horgas, A. L., Rosenberg, A., Saczynski, J. S., and Willis, S. L. (2005). The revised observed tasks of daily living: a performance-based assessment of everyday problem solving in older adults. J. Appl. Gerontol. 24, 211–230. doi: 10.1177/0733464804273772

Diener, E., Emmons, R. A., Larsen, R. J., and Griffin, S. (1985). The satisfaction with life scale. J. Pers. Assess. 49, 71–75. doi: 10.1207/s15327752jpa4901_13

Driving assessment and training method and apparatus (2014). Available at: https://patents.google.com/patent/US20150104757A1/en (Accessed February 11, 2022).

Dubois, B., Slachevsky, A., Litvan, I., and Pillon, B. (2000). The FAB: a frontal assessment battery at bedside. Neurology 55, 1621–1626. doi: 10.1212/WNL.55.11.1621

Duncan, P. W., Bode, R. K., Min Lai, S., and Perera, S., Glycine Antagonist in Neuroprotection Americans Investigators (2003). Rasch analysis of a new stroke-specific outcome scale: the stroke impact scale. Arch. Phys. Med. Rehabil. 84, 950–963. doi: 10.1016/S0003-9993(03)00035-2

Edmans, J. A., Gladman, J. R., Cobb, S., Sunderland, A., Pridmore, T., Hilton, D., et al. (2006). Validity of a virtual environment for stroke rehabilitation. Stroke 37, 2770–2775. doi: 10.1161/01.STR.0000245133.50935.65

Edmans, J., Gladman, J., Hilton, D., Walker, M., Sunderland, A., Cobb, S., et al. (2009). Clinical evaluation of a non-immersive virtual environment in stroke rehabilitation. Clin. Rehabil. 23, 106–116. doi: 10.1177/0269215508095875

Einstein, G. O., and McDaniel, M. A. (1990). Normal aging and prospective memory. J. Exp. Psychol. Learn. Mem. Cogn. 16, 717–726.

Erez, N., Weiss, P. L., Kizony, R., and Rand, D. (2013). Comparing performance within a virtual supermarket of children with traumatic brain injury to typically developing children: a pilot study. OTJR: occupation. Participation Health 33, 218–227. doi: 10.3928/15394492-20130912-04

Ettenhofer, M. L., Guise, B., Brandler, B., Bittner, K., Gimbel, S. I., Cordero, E., et al. (2019). Neurocognitive driving rehabilitation in virtual environments (neuro DRIVE): a pilot clinical trial for chronic traumatic brain injury. Neuro Rehabil 44, 531–544. doi: 10.3233/NRE-192718

Faria, A. L., Andrade, A., Soares, L., and Badia, S. B. (2016). Benefits of virtual reality based cognitive rehabilitation through simulated activities of daily living: a randomized controlled trial with stroke patients. J. Neuro Engin. Rehabil. 13:96. doi: 10.1186/s12984-016-0204-z

Faria, A. L., Paulino, T., and Bermúdez i Badia, S. (2019). Comparing adaptive cognitive training in virtual reality and paper-and-pencil in a sample of stroke patients. Tel Aviv: IEEE.

Faria, A. L., Pinho, M. S., and Badia, S. B. i (2018). Capturing expert knowledge for the personalization of cognitive rehabilitation: study combining computational modeling and a participatory design strategy. JMIR Rehabil. Assist. Technol. 5:e10714. doi: 10.2196/10714

Faria, A. L., Pinho, M. S., and Bermúdez i Badia, S. (2020). A comparison of two personalization and adaptive cognitive rehabilitation approaches: a randomized controlled trial with chronic stroke patients. J Neuro Engin. Rehabil 17:78. doi: 10.1186/s12984-020-00691-5

Farmer, R., and Sundberg, N. D. (1986). Boredom proneness--the development and correlates of a new scale. J. Pers. Assess. 50, 4–17. doi: 10.1207/s15327752jpa5001_2

Fluet, G. G., Merians, A., Qiu, Q., and Adamovich, S. (2013). Sensorimotor training in virtual environments produces similar outcomes to real world training with greater efficiency. In 2013 international conference on virtual rehabilitation (ICVR), 114–118

Folstein, M. F., Folstein, S. E., and McHugh, P. R. (1975). “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 12, 189–198. doi: 10.1016/0022-3956(75)90026-6

Fong, K. N., Chow, K. Y., Chan, B. C., Lam, K. C., Lee, J. C., Li, T. H., et al. (2010). Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury. J. Neuro Engin. Rehabil. 7:19. doi: 10.1186/1743-0003-7-19

Franzen, M. D., and Wilhelm, K. L. (1996). “Conceptual foundations of ecological validity in neuropsychological assessment,” in Ecological validity of neuropsychological testing. eds. R. J. Sbordone and C. J. Long (Gr Press/St Lucie Press, Inc.), 91–112.

Fugl-Meyer, A. R., Jääskö, L., Leyman, I., Olsson, S., and Steglind, S. (1975). The post-stroke hemiplegic patient. 1. A method for evaluation of physical performance. Scand. J. Rehabil. Med. 7, 13–31.

Gamito, P., Oliveira, J., Coelho, C., Morais, D., Lopes, P., Pacheco, J., et al. (2015). Cognitive training on stroke patients via virtual reality-based serious games. Disabil. Rehabil. 1–4. doi: 10.3109/09638288.2014.934925

Gamito, P., Oliveira, J., Pacheco, J., Morais, D., Saraiva, T., Lacerda, R., et al. (2011a). Traumatic brain injury memory training: a virtual reality online solution. Int. J. Disabil. Hum. Develop. 10, 309–312. doi: 10.1515/IJDHD.2011.049

Gamito, P., Oliveira, J., Pacheco, J., Santos, N., Morais, D., Saraiva, T., et al. (2011b). The contribution of a VR-based programme in cognitive rehabilitation following stroke. In 2011 international conference on virtual rehabilitation, 1–2

Gamito, P., Oliveira, J., Santos, N., Pacheco, J., Morais, D., Saraiva, T., et al. (2014). Virtual exercises to promote cognitive recovery in stroke patients: the comparison between head mounted displays versus screen exposure methods. Int. J. Disabil. Hum. Develop. 13, 337–342. doi: 10.1515/ijdhd-2014-0325

Gerber, L. H., Narber, C. G., Vishnoi, N., Johnson, S. L., Chan, L., and Duric, Z. (2014). The feasibility of using haptic devices to engage people with chronic traumatic brain injury in virtual 3D functional tasks. J. Neuro Engin. Rehabil. 11:117. doi: 10.1186/1743-0003-11-117

Gilboa, Y., Jansari, A., Kerrouche, B., Uçak, E., Tiberghien, A., Benkhaled, O., et al. (2017). Assessment of executive functions in children and adolescents with acquired brain injury (ABI) using a novel complex multi-tasking computerised task: The Jansari assessment of Executive Functions for Children (JEF-C©). Neuropsychol. Rehabil. 29, 1359–1382. doi: 10.1080/09602011.2017.1411819

Gilboa, Y., Jansari, A., Kerrouche, B., Uçak, E., Tiberghien, A., Benkhaled, O., et al. (2019). Assessment of executive functions in children and adolescents with acquired brain injury (ABI) using a novel complex multi-tasking computerised task: the Jansari assessment of executive functions for children (JEF-C©). Neuropsychol. Rehabil. 29, 1359–1382.

Grimby, G., Gudjonsson, G., Rodhe, M., Sunnerhagen, K. S., Sundh, V., and Ostensson, M. L. (1996). The functional independence measure in Sweden: experience for outcome measurement in rehabilitation medicine. Scand. J. Rehabil. Med. 28, 51–62.

Gronwall, D. M. A. (1977). Paced auditory serial-addition task: a measure of recovery from concussion. Percept. Mot. Skills 44, 367–373.

Hadad, S. Y., Fung, J., Weiss, P. L., Perez, C., Mazer, B., Levin, M. F., et al. (2012). Rehabilitation tools along the reality continuum: from mock-up to virtual interactive shopping to a living lab. In Proceedings of the international conference on disability, virtual reality and associated technologies, Laval, France, 47–52

Heaton, R. K. (1981). Wisconsin card sorting test manual. Odessa, FL: Psychological Assessment Resources.

Heaton, R. K. (2005). Wisconsin card sorting test-64: Computer version 2 – research edition. (WCST-64CV2) Odessa: Psychological Assessment Resources.

Henry, J. D., and Crawford, J. R. (2004). A meta-analytic review of verbal fluency performance in patients with traumatic brain injury. Neuropsychology 18, 621–628. doi: 10.1037/0894-4105.18.4.621

Hilton, D., Cobb, S., Pridmore, T., and Gladman, J. (2002). Virtual reality and stroke rehabilitation: a tangible interface to an every day task. In Proceedings of the 4th international conference on disability, virtual reality and associated technologies (Citeseer)

Holleman, G. A., Hooge, I. T. C., Kemner, C., and Hessels, R. S. (2020). The ‘real-world approach’ and its problems: a critique of the term ecological validity. Front. Psychol. 11. doi: 10.3389/fpsyg.2020.00721

Hooper, E. H. (1983). Hooper visual organization test (VOT). Los Angeles: Western Psychological Services.

Huang, X., Naghdy, F., Naghdy, G., and Du, H. (2017). Clinical effectiveness of combined virtual reality and robot assisted fine hand motion rehabilitation in subacute stroke patients. In 2017 international conference on rehabilitation robotics (ICORR), 511–515

IPQ. (n.d.). igroup presence questionnaire (IPQ) overview | igroup.org – project consortium. Available at: http://www.igroup.org/pq/ipq/index.php (Accessed July 21, 2023).

Jacoby, M., Averbuch, S., Sacher, Y., Katz, N., Weiss, P. L., and Kizony, R. (2013). Effectiveness of executive functions training within a virtual supermarket for adults with traumatic brain injury: a pilot study. IEEE Trans. Neural Syst. Rehabil. Eng. 21, 182–190. doi: 10.1109/TNSRE.2012.2235184

Johns, M. W. (1991). A new method for measuring daytime sleepiness: the Epworth sleepiness scale. Sleep 14, 540–545. doi: 10.1093/sleep/14.6.540

Josman, N., Hof, E., Klinger, E., Marie, R. M., Goldenberg, K., Weiss, P. L., et al. (2006). “Performance within a virtual supermarket and its relationship to executive functions in post-stroke patients.” in In 2006 international workshop on virtual rehabilitation. 106–109.

Josman, N., Kizony, R., Hof, E., Goldenberg, K., Weiss, P. L., and Klinger, E. (2014). Using the virtual action planning-supermarket for evaluating executive functions in people with stroke. J. Str. Cerebro. Dis. 23, 879–887.

Jovanovski, D., Zakzanis, K., Ruttan, L., Campbell, Z., Erb, S., and Nussbaum, D. (2012). Ecologically valid assessment of executive dysfunction using a novel virtual reality task in patients with acquired brain injury. Appl. Neuro. Adu. 19, 207–220. doi: 10.1080/09084282.2011.643956

Kang, Y. J., Ku, J., Han, K., Kim, S. I., Yu, T. W., Lee, J. H., et al. (2008). Development and clinical trial of virtual reality-based cognitive assessment in people with stroke: preliminary study. Cyber Psychol. Behav. 11, 329–339. doi: 10.1089/cpb.2007.0116

Kaplan, E., Goodglass, H., and Weintraub, S. (n.d.). Boston naming test (BNT). [Database record]. APA PsycTests. doi: 10.1037/t27208-000

Katz, N., Itzkovich, M., Averbuch, S., and Elazar, B. (1989). Loewenstein occupational therapy cognitive assessment (LOTCA) battery for brain-injured patients: reliability and validity. Am. J. Occup. Ther. 43, 184–192.

Katz, N., Ring, H., Naveh, Y., Kizony, R., Feintuch, U., and Weiss, P. L. (2005). Interactive virtual environment training for safe street crossing of right hemisphere stroke patients with unilateral spatial neglect. Disabil. Rehabil. 27, 1235–1244. doi: 10.1080/09638280500076079

Kazui, H., Hirono, N., Hashimoto, M., Nakano, Y., Matsumoto, K., Takatsuki, Y., et al. (2006). Symptoms underlying unawareness of memory impairment in patients with mild Alzheimer’s disease. J. Geriatr. Psychiatry Neurol. 19, 3–12. doi: 10.1177/0891988705277543

Keidser, G., Naylor, G., Brungart, D. S., Caduff, A., Campos, J., Carlile, S., et al. (2020). The quest for ecological validity in hearing science: what it is, why it matters, and how to advance it. Ear Hear. 41, 5S–19S. doi: 10.1097/AUD.0000000000000944

Kennedy, R. S., Lane, N. E., Berbaum, K. S., and Lilienthal, M. G. (1993). Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 3, 203–220.

Kim, J., Kim, K., Kim, D. Y., Chang, W. H., Park, C.-I., Ohn, S. H., et al. (2007). Virtual environment training system for rehabilitation of stroke patients with unilateral neglect: crossing the virtual street. Cyberpsychol. Behav. 10, 7–15. doi: 10.1089/cpb.2006.9998

Kim, H. K., Park, J., Choi, Y., and Choe, M. (2018). Virtual reality sickness questionnaire (VRSQ): motion sickness measurement index in a virtual reality environment. Appl. Ergon. 69, 66–73. doi: 10.1016/j.apergo.2017.12.016

Koch, I., Poljac, E., Müller, H., and Kiesel, A. (2018). Cognitive structure, flexibility, and plasticity in human multitasking—an integrative review of dual-task and task-switching research. Psychol. Bull. 144:557. doi: 10.1037/bul0000144

Kourtesis, P., Collina, S., Doumas, L. A., and MacPherson, S. E. (2019). Technological competence is a pre-condition for effective implementation of virtual reality head mounted displays in human neuroscience: a technological review and meta-analysis. Front. Hum. Neurosci. 13:342. doi: 10.3389/fnhum.2019.00342

Kourtesis, P., Korre, D., Collina, S., Doumas, L. A. A., and MacPherson, S. E. (2020). Guidelines for the development of immersive virtual reality software for cognitive neuroscience and neuropsychology: the development of virtual reality everyday assessment lab (VR-EAL), a neuropsychological test battery in immersive virtual reality. Front. Comput. Sci. 1:12. doi: 10.3389/fcomp.2019.00012

Kourtesis, P., and MacPherson, S. E. (2021). How immersive virtual reality methods may meet the criteria of the National Academy of neuropsychology and American Academy of clinical neuropsychology: a software review of the virtual reality everyday assessment lab (VR-EAL). Comput. Hum. Behav. Reports 4:100151. doi: 10.1016/j.chbr.2021.100151

Kourtesis, P., and MacPherson, S. E. (2023). An ecologically valid examination of event-based and time-based prospective memory using immersive virtual reality: the influence of attention, memory, and executive function processes on real-world prospective memory. Neuropsychol. Rehabil. 33, 255–280. doi: 10.1080/09602011.2021.2008983

Krasny-Pacini, A., Evans, J., Sohlberg, M. M., and Chevignard, M. (2016). Proposed criteria for appraising goal attainment scales used as outcome measures in rehabilitation research. Arch. Phys. Med. Rehabil. 97, 157–170. doi: 10.1016/j.apmr.2015.08.424

Krch, D., Nikelshpur, O., Lavrador, S., Chiaravalloti, N. D., Koenig, S., and Rizzo, A. (2013). Pilot results from a virtual reality executive function task. In 2013 international conference on virtual rehabilitation (ICVR), 15–21

Krupp, L. B., LaRocca, N. G., Muir-Nash, J., and Steinberg, A. D. (1989). The fatigue severity scale: application to patients with multiple sclerosis and systemic lupus erythematosus. Arch. Neurol. 46, 1121–1123. doi: 10.1001/archneur.1989.00520460115022

Kwon, J.-S., Lyoo, I.-K., Hong, K.-S., Yeon, B.-K., and Ha, K.-S. (2002). Development and standardization of the computerized memory assessment for Korean adults. J. Kore. Neur. Assoc. 41, 347–362.

Laugwitz, B., Held, T., and Schrepp, M. (2008). “Construction and evaluation of a user experience questionnaire” in HCI and Usability for Education and Work: 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2008, Graz, Austria, November 20-21, 2008. Proceedings 4 (Berlin Heidelberg: Springer), 63–76.

Laver, K., George, S., Thomas, S., Deutsch, J. E., and Crotty, M. (2015). Virtual reality for stroke rehabilitation: an abridged version of a Cochrane review. Eur. J. Phys. Rehabil. Med. 51, 497–506.

Laver, K. E., Lange, B., George, S., Deutsch, J. E., Saposnik, G., and Crotty, M. (2017). Virtual reality for stroke rehabilitation. Coch. Data. Syst. Rev. 2018:CD008349. doi: 10.1002/14651858.CD008349.pub4

Lloyd, J., Powell, T. E., Smith, J., and Persaud, N. V. (2006). Use of a virtual-reality town for examining route-memory, and techniques for its rehabilitation in people with acquired brain injury. Available at: https://www.researchgate.net/publication/228348119_Use_of_a_virtual-reality_town_for_examining_route-memory_and_techniques_for_its_rehabilitation_in_people_with_acquired_brain_injury (Accessed December 13, 2016)

Lloyd, J., Riley, G. A., and Powell, T. E. (2009). Errorless learning of novel routes through a virtual town in people with acquired brain injury. Neuropsychol. Rehabil. 19, 98–109. doi: 10.1080/09602010802117392

Lorentz, L., Simone, M., Zimmermann, M., Studer, B., Suchan, B., Althausen, A., et al. (2021). Evaluation of a VR prototype for neuropsychological rehabilitation of attentional functions. Virt. Real. 27, 187–199. doi: 10.1007/s10055-021-00534-1

Luca, R. D., Russo, M., Naro, A., Tomasello, P., Leonardi, S., Santamaria, F., et al. (2018). Effects of virtual reality-based training with BTs-nirvana on functional recovery in stroke patients: preliminary considerations. Int. J. Neurosci. 128, 791–796. doi: 10.1080/00207454.2017.1403915

Liu, L., Miyazaki, M., and Watson, B. (1999). Norms and validity of the DriVR: A virtual reality driving assessment for persons with head injuries. Cyber. Behav. 2, 53–67.

Lyle, R. C. (1981). A performance test for assessment of upper limb function in physical rehabilitation treatment and research. Int. J. Rehabil. Res. 4, 483–492.

Maggio, M. G., Latella, D., Maresca, G., Sciarrone, F., Manuli, A., Naro, A., et al. (2019). Virtual reality and cognitive rehabilitation in people with stroke: an overview. J. Neurosci. Nurs. 51, 101–105. doi: 10.1097/JNN.0000000000000423

Malouin, F., Pichard, L., Bonneau, C., Durand, A., and Corriveau, D. (1994). Evaluating motor recovery early after stroke: comparison of the Fugl-Meyer assessment and the motor assessment scale. Arch. Phys. Med. Rehabil. 75, 1206–1212.

Marsh, R. L., and Hicks, J. L. (1998). Event-based prospective memory and executive control of working memory. J. Exp. Psychol. Learn. Mem. Cogn. 24, 336–349.

Matheis, R. J., Schultheis, M. T., Tiersky, L. A., DeLuca, J., Millis, S. R., and Rizzo, A. (2007). Is learning and memory different in a virtual environment? Clin. Neuropsychol. 21, 146–161. doi: 10.1080/13854040601100668

Mathiowetz, V., Weber, K., Kashman, N., and Volland, G. (1985). Adult norms for the nine hole peg test of finger dexterity. Occup. Ther. J. Res. 5, 24–38. doi: 10.1177/153944928500500102

Mesulam, M.-M. (ed.) (2000). Principles of behavioral and cognitive neurology. 2nd Edn. Oxford, New York: OUP USA.

Mioshi, E., Dawson, K., Mitchell, J., Arnold, R., and Hodges, J. R. (2006). The Addenbrooke’s cognitive examination revised (ACE-R): a brief cognitive test battery for dementia screening. Int. J. Geriatr. Psychiatry 21, 1078–1085. doi: 10.1002/gps.1610

Mittelstaedt, J. M., Wacker, J., and Stelling, D. (2019). VR aftereffect and the relation of cybersickness and cognitive performance. Virtual Reality 23, 143–154. doi: 10.1007/s10055-018-0370-3

Mullen, G., and Davidenko, N. (2021). Time compression in virtual reality. Timing Time Percept. 9, 377–392. doi: 10.1163/22134468-bja10034

Navarro, M.-D., Lloréns, R., Noé, E., Ferri, J., and Alcañiz, M. (2013). Validation of a low-cost virtual reality system for training street-crossing. A comparative study in healthy, neglected and non-neglected stroke individuals. Neuropsychol. Rehabil. 23, 597–618. doi: 10.1080/09602011.2013.806269

Naveh, Y., Katz, N., and Weiss, T. (2000). The effect of interactive virtual environment training on independent safe street crossing of right CVA patients with unilateral spatial neglect. Available at: https://www.researchgate.net/publication/228605928_The_effect_of_interactive_virtual_environment_training_on_independent_safe_street_crossing_of_right_CVA_patients_with_unilateral_spatial_neglect (Accessed December 13, 2016).

Neisser, U. (1982). Memory: what are the important questions. Memory observed: Remembering in natural contexts. 3–9.

Nelson, H., and Willison, J. (1991). National Adult Reading Test (NART), 2nd ed. Test Manual, Berkshire. NFERNELSON Publishing Company Ltd.

Nesbitt, K., Davis, S., Blackmore, K., and Nalivaiko, E. (2017). Correlating reaction time and nausea measures with traditional measures of cybersickness. Displays 48, 1–8. doi: 10.1016/j.displa.2017.01.002

Nir-Hadad, S. Y., Weiss, P. L., Waizman, A., Schwartz, N., and Kizony, R. (2015). A virtual shopping task for the assessment of executive functions: validity for people with stroke. Neuropsychol. Rehabil. 1–26. doi: 10.1080/09602011.2015.1109523

O’Brien, J. (2007). “Simulating the homes of stroke patients: can virtual environments help to promote engagement in therapy activities?.” in Virtual Rehabil. In 2007 Virtual Rehabilitation. (IEEE), 23–28.

Ogourtsova, T., Archambault, P., Sangani, S., and Lamontagne, A. (2018). Ecological virtual reality evaluation of neglect symptoms (EVENS): effects of virtual scene complexity in the assessment of poststroke unilateral spatial neglect. Neurorehabil. Neural Repair 32, 46–61. doi: 10.1177/1545968317751677

Okahashi, S., Seki, K., Nagano, A., Luo, Z., Kojima, M., and Futaki, T. (2013). A virtual shopping test for realistic assessment of cognitive function. J. Neuro Engin. Rehabil. 10:59. doi: 10.1186/1743-0003-10-59

Oliveira, J., Gamito, P., Lopes, B., Silva, A. R., Galhordas, J., Pereira, E., et al. (2020). Computerized cognitive training using virtual reality on everyday life activities for patients recovering from stroke. Disabil. Rehabil. Assist. Technol. 298–303. doi: 10.1080/17483107.2020.1749891

Osterrieth, P. (1954). Filetest de copie d’une figure complex: contribution a l’etude de la perception et de la mémoire. Arch. Psychol. 19, 87–95.

Park, M.-O. (2015). A comparison of driving errors in patients with left or right hemispheric lesions after stroke. J. Phys. Ther. Sci. 27, 3469–3471. doi: 10.1589/jpts.27.3469

Parsons, T. D. (2015). Virtual reality for enhanced ecological validity and experimental control in the clinical, affective and social neurosciences. Front. Hum. Neurosci. 9, 660. doi: 10.3389/fnhum.2015.00660

Parsons, T. D. (2016). “Neuropsychological rehabilitation 3.0: state of the science” in Clinical neuropsychology and technology (Springer, Cham: Springer International Publishing), 113–132. doi: 10.1007/978-3-319-31075-6_7

Parsons, T. D., McMahan, T., and Kane, R. (2018). Practice parameters facilitating adoption of advanced technologies for enhancing neuropsychological assessment paradigms. Clin. Neuropsychol. 32, 16–41. doi: 10.1080/13854046.2017.1337932

Patchick, E., Vail, A., Wood, A., and Bowen, A. (2015). PRECiS (patient reported evaluation of cognitive state): psychometric evaluation of a new patient reported outcome measure of the impact of stroke. Clin. Rehabil. 1229–1241. doi: 10.1177/0269215515624480

Peeters, D. (2019). Virtual reality: a game-changing method for the language sciences. Psychon. Bull. Rev. 26, 894–900. doi: 10.3758/s13423-019-01571-3

Piéron, H. (1955). Metodologia psicotécnica Kapelusz.

Poncet, F., Swaine, B., Dutil, E., Chevignard, M., and Pradat-Diehl, P. (2017). How do assessments of activities of daily living address executive functions: a scoping review. Neuro. Reh. 27, 618–666. doi: 10.1080/09602011.2016.1268171

Qiu, Q., Cronce, A., Patel, J., Fluet, G. G., Mont, A. J., Merians, A. S., et al. (2020). Development of the home based virtual rehabilitation system (HoVRS) to remotely deliver an intense and customized upper extremity training. J. Neuroeng. Rehabil. 17, 1–10. doi: 10.1186/s12984-020-00789-w

Rand, D., Katz, P. L., and Weiss, P. L. (2007). Evaluation of virtual shopping in the VMall: comparison of post-stroke participants to healthy control groups. Dis. Reh. 29, 1710–1719. doi: 10.1080/09638280601107450

Rand, D., Katz, N., and Weiss, P. L. (2009a). Intervention using the VMall for improving motor and functional ability of the upper extremity in post stroke participants. Eur. J. Phys. Rehabil. Med. 45, 113–121.

Rand, D., Rukan, S. B.-A., Weiss, P. L., and Katz, N. (2009b). Validation of the virtual MET as an assessment tool for executive functions. Neuropsychol. Rehabil. 19, 583–602. doi: 10.1080/09602010802469074

Rand, D., Weiss, P. L., and Katz, N. (2009c). Training multitasking in a virtual supermarket: a novel intervention after stroke. Am. J. Occup. Ther. 63, 535–542. doi: 10.5014/ajot.63.5.535

Raspelli, S., Pallavicini, F., Carelli, L., Morganti, F., Pedroli, E., Cipresso, P., et al. (2012). Validating the neuro VR-based virtual version of the multiple errands test: preliminary results. Presence Teleop. Virt. 21, 31–42. doi: 10.1162/PRES_a_00077

Reitan, R. M. (1958). Validity of the trail making test as an Indicator of organic brain damage. Percept. Mot. Skills 8, 271–276.

Renison, B., Ponsford, J., Testa, R., Richardson, B., and Brownfield, K. (2012). The ecological and construct validity of a newly developed measure of executive function: the virtual library task. J. Int. Neuropsychol. Soc. 18, 440–450. doi: 10.1017/S1355617711001883

Rizzo, A., Schultheis, M., Kerns, K. A., and Mateer, C. (2004). Analysis of assets for virtual reality applications in neuropsychology. Neuropsychol. Rehabil. 14, 207–239. doi: 10.1080/09602010343000183

Robba, C., and Citerio, G. (2023). Highlights in traumatic brain injury research in 2022. Lancet Neurol 22, 12–13. doi: 10.1016/S1474-4422(22)00472-0

Romero-Ayuso, D., Castillero-Perea, Á., González, P., Navarro, E., Molina-Massó, J. P., Funes, M. J., et al. (2021). Assessment of cognitive instrumental activities of daily living: a systematic review. Disabil. Rehabil. 43, 1342–1358. doi: 10.1080/09638288.2019.1665720

Ruff, R. M., and Parker, S. B. (1993). Gender-and age-specific changes in motor speed and eye-hand coordination in adults: normative values for the finger tapping and grooved pegboard tests. Percept. Mot. Skills 76, 1219–1230. doi: 10.2466/pms.1993.76.3c.1219

Ruggiero, K. J., Ben, K. D., Scotti, J. R., and Rabalais, A. E. (2003). Psychometric properties of the PTSD checklist—civilian version. J. Trauma. Stress. 16, 495–502. doi: 10.1023/A

Saredakis, D., Szpak, A., Birckhead, B., Keage, H. A., Rizzo, A., and Loetscher, T. (2020). Factors associated with virtual reality sickness in head-mounted displays: a systematic review and meta-analysis. Front. Hum. Neurosci. 14:96. doi: 10.3389/fnhum.2020.00096

Schuhfried, G. (1996). Reha Com. G. Schuhfried Gmb H, Mödling.

Segawa, T., Baudry, T., Bourla, A., Blanc, J.-V., Peretti, C.-S., Mouchabac, S., et al. (2020). Virtual reality (VR) in assessment and treatment of addictive disorders: a systematic review. Front. Neurosci. 13:1409. doi: 10.3389/fnins.2019.01409

Slater, M. (1999). Measuring presence: a response to the Witmer and Singer presence questionnaire. Presence 8, 560–565. doi: 10.1162/105474699566477

Slater, M., Banakou, D., Beacco, A., Gallego, J., Macia-Varela, F., and Oliva, R. (2022). A separate reality: an update on place illusion and plausibility in virtual reality. Front. Virtual Reality 3:914392. doi: 10.3389/frvir.2022.914392

Smith, A. (1982). Symbol digit modalities test: Manual. Los Angeles: Western Psychological Services.

Sorita, E., Joseph, P. A., N’kaoua, B., Ruiz, J., Simion, A., Mazaux, J. M., et al. (2014). Performance analysis of adults with acquired brain injury making errands in a virtual supermarket. Ann. Phys. Rehabil. Med. 57:e85. doi: 10.1016/j.rehab.2014.03.415

Sorita, E., N’Kaoua, B., Larrue, F., Criquillon, J., Simion, A., Sauzéon, H., et al. (2013). Do patients with traumatic brain injury learn a route in the same way in real and virtual environments? Disabil. Rehabil. 35, 1371–1379. doi: 10.3109/09638288.2012.738761

Spreen, O., and Strauss, E. (1991). A compendium of neuropsychological tests: Administration, norms, and commentary. Oxford, New York: Oxford University Press.

Spreen, O., and Strauss, E. (1998). A compendium of neuropsychological tests. New York: Oxford University Press, 213–218.

Spreij, L. A., Ten Brink, A. F., Visser-Meily, J. M., and Nijboer, T. C. (2020a). Simulated driving: the added value of dynamic testing in the assessment of visuo-spatial neglect after stroke. J. Neuro. 14, 28–45. doi: 10.1111/jnp.12172

Spreij, L. A., Visser-Meily, J. M., Sibbel, J., Gosselt, I. K., and Nijboer, T. C. (2020b). Feasibility and user-experience of virtual reality in neuropsychological assessment following stroke. Neuro. Reh. 32, 499–519. doi: 10.1080/09602011.2020.1831935

Spreij, L. A., Visser-Meily, J. M., Sibbel, J., Gosselt, I. K., and Nijboer, T. C. (2022). Feasibility and user-experience of virtual reality in neuropsychological assessment following stroke. Neuropsychological Rehabilitation 32, 499–519.

Taub, E., Miller, N. E., Novack, T. A., Cook, E. W., Fleming, W. C., Nepomuceno, C. S., et al. (1993). Technique to improve chronic motor deficit after stroke. Arch. Phys. Med. Rehabil. 74, 347–354.

Thielbar, K., Spencer, N., Tsoupikova, D., Ghassemi, M., and Kamper, D. (2020). Utilizing multi-user virtual reality to bring clinical therapy into stroke survivors’ homes. J. Hand Ther. 33, 246–253. doi: 10.1016/j.jht.2020.01.006

Titov, N., and Knight, R. G. (2005). A computer-based procedure for assessing functional cognitive skills in patients with neurological injuries: the virtual street. Brain Inj. 19, 315–322. doi: 10.1080/02699050400013725

Toulouse, E., and Piéron, H. (1904). Technique de Psychologie Experimentale: Examen des Sujets. Paris Octave Doin, Editeur.

Triandafilou, K. M., Tsoupikova, D., Barry, A. J., Thielbar, K. N., Stoykov, N., and Kamper, D. G. (2018). Development of a 3D, networked multi-user virtual reality environment for home therapy after stroke. J. Neuroeng. Rehabil. 15, 1–13. doi: 10.1186/s12984-018-0429-0

Tsao, C. W., Aday, A. W., Almarzooq, Z. I., Anderson, C. A., Arora, P., Avery, C. L., et al. (2023). Heart disease and stroke statistics—2023 update: a report from the American Heart Association. Circulation 147, e93–e621.

Vasser, M., and Aru, J. (2020). Guidelines for immersive virtual reality in psychological research. Curr. Opin. Psychol. 36, 71–76. doi: 10.1016/j.copsyc.2020.04.010

Vecchiato, G., Tieri, G., Jelic, A., De Matteis, F., Maglione, A. G., and Babiloni, F. (2015). Electroencephalographic correlates of sensorimotor integration and embodiment during the appreciation of virtual architectural environments. Front. Psychol. 6. doi: 10.3389/fpsyg.2015.01944

Vourvopoulos, A., Blanco-Mora, D. A., Aldridge, A., Jorge, C., Figueiredo, P. I., and Badia, S. B. (2022). “Enhancing Motor-Imagery Brain-Computer Interface Training With Embodied Virtual Reality: A Pilot Study With Older Adults” in In 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE) (IEEE), 157–162. doi: 10.1109/MetroXRAINE54828.2022.9967664

Vourvopoulos, A., Faria, A. L., Ponnam, K., and Bermudez i Badia, S. (2014). RehabCity: design and validation of a cognitive assessment and rehabilitation tool through gamified simulations of activities of daily living. In Proceedings of the 11th conference on advances in computer entertainment technology ACE, New York, NY, USA: ACM), 26: 1–26: 8

Wagner, S., Belger, J., Joeres, F., Thöne-Otto, A., Hansen, C., Preim, B., et al. (2021). iVRoad: immersive virtual road crossing as an assessment tool for unilateral spatial neglect. Comput. Graph. 99, 70–82. doi: 10.1016/j.cag.2021.06.013

Wald, J. L., Liu, L., and Reil, S. (2000). Concurrent validity of a virtual reality driving assessment for persons with brain injury. Cyberpsychol. Behav. 3, 643–654.

Ware, J. (1993). SF-36 health survey: manual and interpretation guide. The Health Institute. Available at: https://ci.nii.ac.jp/naid/10011164820/

Weber, E., Goverover, Y., and DeLuca, J. (2019). Beyond cognitive dysfunction: relevance of ecological validity of neuropsychological tests in multiple sclerosis. Mult. Scler. J. 25, 1412–1419. doi: 10.1177/1352458519860318

Wechsler, D. (1997a). Wechsler Memory Scale–Third Edition: manual. New York: The Psychological Corporation.

Wechsler, D. (1997b). Wechsler Adult Intelligence Scale–III. New York: The Psychological Corporation.

Weiss, P. L. T., Naveh, Y., and Katz, N. (2003). Design and testing of a virtual environment to train stroke patients with unilateral spatial neglect to cross a street safely. Occup. Ther. Int. 10, 39–55. doi: 10.1002/oti.176

Werner, P., and Korczyn, A. D. (2012). Willingness to use computerized systems for the diagnosis of dementia: testing a theoretical model in an Israeli sample. Alzheimer Dis. Assoc. Disord. 26, 171–178. doi: 10.1097/WAD.0b013e318222323e

Willer, B., Ottenbacher, K. J., and Coad, M. L. (1994). The community integration questionnaire. A comparative examination. Am. J. Phys. Med. Rehabil. 73, 103–111.

Willer, B., Rosenthal, M., Kreutzer, J., Gordon, W., and Rempel, R. (1993). Assessment of community integration following rehabilitation for traumatic brain injury. J. Head Trauma Rehabil. 8, 75–87.

Wilson, B. A., Alderman, N., Burgess, P. W., Emslie, H., and Evans, J. J. (1996). Behavioural Assessment of the Dysexecutive Syndrome (BADS). Bury St. Edmunds, England: Thames Valley Test Company.

Wilson, B., Cockburn, J., and Baddeley, A. (1987a). Rivermead Behavioural Memory Test. London: Thames Valley Test Company.

Wilson, B., Cockburn, J., and Halligan, P. (1987b). Behavioural Inattention Test: manual. Fareham: Thames Valley Test Company.

Wilson, B. A., Emslie, H., Foley, J., Shiel, A., Watson, P., Hawkins, K., et al. (2005). The Cambridge Prospective Memory Test. London: Harcourt Assessment.

Witmer, B. G., and Singer, M. J. (1998). Measuring presence in virtual environments: a presence questionnaire. Presence Teleop. Virt. 7, 225–240. doi: 10.1162/105474698565686

Wolf, S. L., Catlin, P. A., Ellis, M., Archer, A. L., Morgan, B., and Piacentino, A. (2001). Assessing Wolf motor function test as outcome measure for research in patients after stroke. Stroke 32, 1635–1639. doi: 10.1161/01.STR.32.7.1635

Yin, C. W., Sien, N. Y., Ying, L. A., Chung, S. F.-C. M., and Leng, T. M. (2014). Virtual reality for upper extremity rehabilitation in early stroke: a pilot randomized controlled trial. Clin. Rehabil. 28, 1107–1114. doi: 10.1177/0269215514532851

Yip, B. C., and Man, D. W. (2013). Virtual reality-based prospective memory training program for people with acquired brain injury. NeuroRehabilitation 32, 103–115. doi: 10.3233/NRE-130827

Zaidi, S. F. M., Duthie, C., Carr, E., and Maksoud, S. H. A. E. (2018). “Conceptual framework for the usability evaluation of gamified virtual reality environment for non-gamers,” in Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI ’18) (New York, NY, USA: Association for Computing Machinery), 1–4.

Zanier, E. R., Zoerle, T., Di Lernia, D., and Riva, G. (2018). Virtual reality for traumatic brain injury. Front. Neurol. 9:345. doi: 10.3389/fneur.2018.00345

Zhang, L., Abreu, B. C., Masel, B., Scheibel, R. S., Christiansen, C. H., Huddleston, N., et al. (2001). Virtual reality in the assessment of selected cognitive function after brain injury. Am. J. Phys. Med. Rehabil. 80, 597–604. doi: 10.1097/00002060-200108000-00010

Zhang, L., Abreu, B. C., Seale, G. S., Masel, B., Christiansen, C. H., and Ottenbacher, K. J. (2003). A virtual reality environment for evaluation of a daily living skill in brain injury rehabilitation: reliability and validity. Arch. Phys. Med. Rehabil. 84, 1118–1124. doi: 10.1016/S0003-9993(03)00203-X

Zimmermann, P., and Fimm, B. (1992). Testbatterie zur Aufmerksamkeitsprüfung (TAP) [Test battery for attentional performance]. Würselen: Psytest.

Zygouris, S., and Tsolaki, M. (2015). Computerized cognitive testing for older adults: a review. Am. J. Alzheimers Dis. Other Dement. 30, 13–28. doi: 10.1177/1533317514522852

Keywords: ecological validity, virtual reality, assessment, rehabilitation, acquired brain injury, activities of daily living

Citation: Faria AL, Latorre J, Silva Cameirão M, Bermúdez i Badia S and Llorens R (2023) Ecologically valid virtual reality-based technologies for assessment and rehabilitation of acquired brain injury: a systematic review. Front. Psychol. 14:1233346. doi: 10.3389/fpsyg.2023.1233346

Received: 02 June 2023; Accepted: 03 August 2023;
Published: 29 August 2023.

Edited by:

Lawrence M. Parsons, The University of Sheffield, United Kingdom

Reviewed by:

Fabrizio Stasolla, Giustino Fortunato University, Italy
Panagiotis Kourtesis, National and Kapodistrian University of Athens, Greece
Emilia Biffi, Eugenio Medea (IRCCS), Italy

Copyright © 2023 Faria, Latorre, Silva Cameirão, Bermúdez i Badia and Llorens. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ana Lúcia Faria, anafaria@staff.uma.pt

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.