Specialty Grand Challenge Article
Grand challenges in virtual environments
- ¹Event Laboratory, Faculty of Psychology, ICREA-University of Barcelona, Barcelona, Spain
- ²Department of Computer Science, University College London, London, UK
In his 1999 article reviewing the state of virtual reality (VR), Prof. Fred Brooks Jr. noted that at that time the field had made great technical advances over the previous 5 years. Brooks (1999) wrote: “I think our technology has crossed over the pass – VR that used to almost work now barely works. VR is now really real.” What is the state today? Fifteen years later it is clear that not only does VR “really work” but that it has become a commonplace tool in many areas of science, technology, psychological therapy, medical rehabilitation, marketing, and industry, and it is surely about to become commonplace in the home.
Let us very briefly recap the major technologies reviewed by Brooks (1999). On the display side, projection systems such as Caves (Cruz-Neira et al., 1992) have advanced to very high resolution, based on multiple panel displays with automatic and seamless image alignment (Brown et al., 2005; Defanti et al., 2011). Head-mounted displays (HMDs) have improved dramatically, presenting wide field-of-view, high-resolution images, with very recent advances toward lightweight, low cost, consumer oriented devices with wide field-of-view and acceptable resolution. Beyond specialized devices there is the promise of a further development: already usable HMDs made from 3D printable plastic frames and cheap lenses that house a smartphone (Olson et al., 2011; Hoberman et al., 2012; Steed and Julier, 2013).
Devices for virtual and augmented reality displays typically require the participant to wear specialized glasses. Autostereo displays obviate the need for this but offer a limited continuous field-of-view (Holliman et al., 2011). An important grand challenge in the display area is the provision of high quality, full field-of-view stereo displays that do not require special glasses, where there is a seamless blend between reality and VR. Steps are being made in this direction (Hilliges et al., 2012), and such displays are also being developed from low cost consumer devices (Maimone et al., 2012).
The other side of the equation to display is tracking. In recent years whole body tracking has become a relatively low cost product enabling real-time motion capture. But just as with HMDs, the arrival of consumer oriented tracking systems and depth cameras, marketed for computer games, is likely to revolutionize how real-time full body tracking, and probably head-tracking, develop. Head and body tracking over wide areas with low latency and high accuracy remains a significant challenge.
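To illustrate why latency matters, one common engineering workaround (not discussed in this article, and sketched here only as an illustration with invented numbers) is to extrapolate head pose over the system's motion-to-photon latency, so that the rendered frame better matches where the head will actually be when the image appears:

```python
# Illustrative sketch: constant-velocity prediction of head yaw over the
# end-to-end (motion-to-photon) latency of a tracked display system.
# All numerical values below are invented for illustration.

def predict_yaw(yaw_deg, angular_velocity_deg_s, latency_s):
    """Extrapolate head yaw (degrees) across the system latency."""
    return yaw_deg + angular_velocity_deg_s * latency_s

# A head turning at 90 deg/s viewed through a system with 20 ms latency:
# without prediction the displayed image lags the head by 1.8 degrees.
predicted = predict_yaw(30.0, 90.0, 0.020)
print(predicted)  # about 31.8
```

Real trackers use more sophisticated filters, but even this sketch shows how latency converts directly into angular error, which is why wide-area, low-latency tracking remains hard.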
One thing is clear from the above: VR is moving to the home. However, the vast majority of studies that assess people’s responses to VR experiences involve very short, one-off exposure times – a notable exception being Steed et al. (2003). With the move to the consumer market this situation will and must change – although simulator sickness may still be a significant problem. If people in their millions are using these systems for hours each week, we really need to understand the impact of this on their lives and the resulting social impact, as well as seeing this as an opportunity to carry out massive experimental studies of scientific interest that also inform the technology.
The above gives the impression that VR is defined by devices and associated software. However, it is more useful to consider VR conceptually as a technological system that can precisely substitute a person’s sensory input and transform the meaning of their motor outputs with reference to an exactly knowable alternate reality (“knowable” to distinguish from dreams or hallucinogenic experiences). In this view motor actions and sensory input are not separable. In order to perceive it is necessary to act – to move and position the body, head, eyes, ears, nose, and end-effectors, in active perception. There are implicit rules that we acquire that integrate perception and action, referred to as sensorimotor contingencies (O’Regan and Noë, 2001; Noë, 2004). The affordance of natural sensorimotor contingencies for perception by a VR system is a key to the generation of the fundamental illusion that people typically experience in an immersive VR – the illusion of “presence,” “being there,” or “place illusion (PI)” (Sanchez-Vives and Slater, 2005; Slater, 2009).
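The dependence of perception on action can be made concrete with a toy sketch (my illustration, not from the article): a simulated “display” whose output is a function of head orientation, so that the sensory stream changes only through the participant's motor actions. The scene contents and field of view below are invented for illustration.

```python
import math

# Toy model of a sensorimotor contingency: what is "seen" depends on the
# action of orienting the head. SCENE maps object names to bearings
# (radians); objects within the half field of view of the gaze direction
# are visible. All values are invented for illustration.

SCENE = {"door": 0.0, "window": math.pi / 2, "lamp": math.pi}
HALF_FOV = math.pi / 4  # 45 degrees either side of the gaze direction

def angular_difference(a, b):
    """Smallest signed angle between two bearings, in (-pi, pi]."""
    return (a - b + math.pi) % (2 * math.pi) - math.pi

def render(head_yaw):
    """Return the objects visible for this head orientation."""
    return sorted(name for name, bearing in SCENE.items()
                  if abs(angular_difference(bearing, head_yaw)) <= HALF_FOV)

# The same static scene yields different sensory streams depending on
# the participant's action of turning the head:
print(render(0.0))          # facing forward: the door is in view
print(render(math.pi / 2))  # after a 90 degree head turn: the window
```

The point of the sketch is the contingency itself: the sensory input is a function of the motor output, which is what head tracking plus head-mounted display affords, and what a fixed monitor does not.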
Sensorimotor contingencies refer to acts of perception by the participant that change his or her sensory stream. However, there will typically also be autonomous changes or events in the sensory stream that are not caused by the actions of the participant. When such events, not under the control of the participant, nevertheless occur contingently in response to the participant’s actions, this can give rise to a second illusion – that the perceived events are really happening (even though they are known for sure to be occurring in a fake reality). This “plausibility illusion (Psi)” therefore refers to the dynamic aspects of the virtual environment, whereas PI can occur in a completely static environment. Psi also requires that if the situation being simulated is one that could be encountered in reality, then it must meet at least minimal expectations as to how that reality functions. This signals that achieving Psi is a significantly greater challenge than PI. Whereas PI relies on deterministic features such as the properties of the display and tracking systems, Psi requires ecologically valid interactions between the environment and the participant, and designs that take into account people’s expectations – and hence requires a high level of domain knowledge.
Sensorimotor contingencies include reaching out and touching. Moreover, if an object or virtual human collides with the participant, then an expected correlate of this event would be to feel the corresponding touch and force. In spite of significant research and advances in haptic technology, this possibility is clearly the least developed in a general sense. Of course there are haptic devices, already discussed by Brooks (1999), that provide excellent point contact, including some level of tactile and force feedback, for highly specific setups. Additionally there are exoskeleton devices that can provide force feedback to different parts of the body. At the low cost end there are devices that can supply vibrotactile stimulation or air pressure through a vest, and even a relatively low cost, ingenious device that uses air vortex generation to deliver haptics at a distance (Sodhi et al., 2013). However, these do not address the general problem of haptics.
Imagine the following scenario: while “walking” through an environment, your elbow happens to brush against a wall. Later you pass by a water fountain and stop for a drink. A few minutes later you accidentally walk into a door and bump your head. A person walking by claims to recognize you and grabs your hand to vigorously shake it. Later you step onto an up escalator …. Compare this with the visual system: an HMD can in principle be used to display any type of visual stimulus, and a single binaural auditory setup can be used for many different types of auditory stimuli. However, there is no single general-purpose haptic device that could deliver appropriate tactile and force feedback for these examples. In fact, there may not be any haptic device that reasonably represents even one of them. For example, specialized, bulky, single-purpose devices are required today even to deliver a crude approximation of something as commonplace as a human handshake (Giannopoulos et al., 2011).
If we wait long enough will these problems be solved through technological advances? To some extent this will be the case. But there are also limits imposed by the laws of physics. In what I believe is one of the most important theoretical articles ever written about VR, the physicist Prof. David Deutsch discusses the example of weightlessness, where the laws of physics dictate that there can be no VR simulation (Deutsch, 2011, chapter 5). A VR can simulate many aspects of the experience of being in a spacecraft – the visual, the auditory, the haptic controls and feedback, the visual illusion of floating, etc. – which may be sufficient to generate an illusion of weightlessness (which, of course, might be nothing like the true sensation). It cannot, however, simulate actual weightlessness, since the laws of physics do not permit this under earth gravity.
Taking this argument further: if VR provides a universal multisensory simulation of reality, then one type of activity that it must also be able to simulate is the recursive act of going into and experiencing a VR system. Here, the participant in a VR picks up and dons a (virtual) HMD, earphones, and tracking devices, and then enters a simulation of using a VR system. Within that second-level VR the participant can enter a third-level VR, and so on. Slater (2009) argued that this provides a method for a more precise specification of the concept of “immersion”: system VR(A) is more “immersive” than system VR(B) if VR(A) can be used to simulate going into and experiencing VR(B), but not vice versa. Hence “immersion” becomes a relational operator that defines a partial order over all possible abstractions of VR systems (a version of VR within VR has actually been studied; Slater et al., 1994). Since VR is supposed to simulate reality, this scheme would give rise to the paradox that there exists a VR system that is more immersive than reality! However, there must always be some aspect of reality that cannot be simulated by such a VR system. If not, then the experience provided by the system would be completely indistinguishable from reality and therefore would itself be “reality.” For example, an experience in such a system would require the participant to be amnesic about the fact that they had ever entered the VR system, requiring a seamless continuity of experience from the real to the virtual. More simply, they would have to be completely unaware of the external devices involved in bringing them into the VR. The goal of VR to accurately simulate all aspects of reality is physically infeasible.
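One way to see why this relation is only a partial order is to formalize it. The sketch below is my own toy formalization, not the article's: each system is represented by the set of sensorimotor channels it can reproduce, “A can simulate entering B” is modeled as channel coverage, and “more immersive” then becomes strict superset inclusion, under which some pairs of systems are simply incomparable. The systems and channel names are invented for illustration.

```python
# Toy formalization of Slater's relational notion of immersion.
# Each VR system is modeled as the set of sensorimotor channels it
# supports (names and assignments are invented for illustration).

cave = {"wide_fov_stereo", "surround_audio", "real_walking"}
hmd = {"wide_fov_stereo", "surround_audio", "real_walking", "full_occlusion"}
desktop = {"mono_display", "stereo_audio"}

def can_simulate(a, b):
    """A can simulate going into and experiencing B: A covers B's channels."""
    return b <= a  # subset test on capability sets

def more_immersive(a, b):
    """A simulates B, but B cannot simulate A (strict partial order)."""
    return can_simulate(a, b) and not can_simulate(b, a)

print(more_immersive(hmd, cave))      # True under these invented sets
print(more_immersive(cave, hmd))      # False
print(more_immersive(cave, desktop))  # False: the two are incomparable
```

The third case is the interesting one: because the desktop system has a channel the Cave lacks in this toy model, neither dominates the other, exactly the situation a partial (rather than total) order permits.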
This raises the question of the very goal of VR. Is our quest to exactly simulate reality? If so, we already know that this cannot be achieved. However, if our goal is to produce an illusion of reality, then we are on much safer ground – since we can bypass the limits imposed by physics and instead work directly through the idea of tricking the brain. It is certainly not a new idea that VR aims not at simulating reality but at producing illusions – which goes to the very heart of why it works at all. This was encapsulated in the approach of the late Prof. Lawrence Stark (Stark, 1995) – as he once said: “Virtual Reality works because reality is virtual.” If we recognize that VR aims at the production of illusions rather than at reproducing reality, then we can explicitly set out to exploit the amazing capabilities of the brain. Even generating feelings of weightlessness may be tractable from this point of view. For example, Ramachandran and Seckel (2010) have shown how a simple arrangement of two mirrors can be used with fibromyalgia patients to reduce their pain, and coincidentally give them a feeling of weightlessness.
Virtual reality profits from exploiting the brain to produce illusions of perception and action. This is like finding loopholes in the brain’s representations and then making use of them to produce an illusory reality. However, following Deutsch (2011), the long term success of VR will necessarily lie in actively stimulating the brain to directly produce illusions of any and every type of sensory stimulus and every type of motor action. In other words, VR will need to become a brain interface, as the only way of approaching the ideal of a system that can indeed substitute perception of and active engagement in knowable alternate realities, and even meet the requirement that the participant be amnesic about having entered a VR system. This sounds far-fetched, but advances are being made in neuroscience on decoding the structure of visual representation (e.g., Kamitani and Tong, 2005; Horikawa et al., 2013), and on electrical brain stimulation that may allow sightless people to regain some vision (Merabet et al., 2005). To summarize this outlook, eventually VR will become a branch of applied neuroscience.
Today, however, the relationship between neuroscience and VR is that VR is used as an important tool in the investigation of some neuroscientific problems (Bohil et al., 2011). In particular, a great deal of attention over many years has gone into the use of VR in the study of navigation (e.g., Tcheang et al., 2011), and more recently into how the brain represents the body (Ehrsson, 2007; Lenggenhager et al., 2007; Petkova and Ehrsson, 2008; Slater et al., 2010). This has also opened a very interesting new application field for VR – since the work on body representation shows that VR can be used not simply for PI and Psi but also to change the self (Yee and Bailenson, 2007; Banakou et al., 2013; Kilteni et al., 2013; Peck et al., 2013).
Finally, immersive virtual environments cover a huge field. There are many critical issues not covered here: advanced display and tracking hardware, architectures, systems, and devices; software systems for implementing applications, and software and hardware architectures for efficiently managing the multiple resources and devices typically required in a VR application; shared virtual environments with multiple participants; augmented and mixed reality; AI, both for adaptively learning about the goals of participants and for controlling virtual human characters; and the close and growing connections between VR and robotics. An important issue, given the necessity for VR to represent the body of the participant and those of others, is the problem of generating high fidelity virtual human characters (and indeed animals in general). Needless to say, none of these are “solved” problems, though very significant advances have been made since Brooks (1999).
To summarize, there are two major grand challenges that will shape the field in the coming years. (1) VR is becoming a mass consumer product. This will affect it at every level: devices must be cheap, safe, and deliver convincing experiences; software should allow people naïve to programming to produce or modify environments; and researchers should take the opportunity for very large scale, longitudinal empirical studies of the impact of these systems on consumers. (2) If the approach to VR is an attempt to simulate reality with devices that produce displays and feedback of ever closer to realistic fidelity, then we will eventually come up against barriers determined by physics. This is especially the case in the realm of haptics, where there seems to be no possibility of a generalized device capable of delivering multiple different types of vestibular, tactile, and force feedback arbitrarily located anywhere on the body. We should explicitly recognize that our best ally is the brain itself, exploiting its illusion generating capacity, and ultimately achieve the highest fidelity VR through direct brain interfaces for the creation of knowable alternate worlds.
Conflict of Interest Statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments

Thanks to Maria V. Sanchez-Vives for helpful comments on an earlier version of this paper. The author acknowledges support from the European Research Council Advanced Grant TRAVERSE (#227985).
Banakou, D., Groten, R., and Slater, M. (2013). Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes. Proc. Natl. Acad. Sci. U.S.A. 110, 12846–12851. doi:10.1073/pnas.1306779110
Giannopoulos, E., Wang, Z., Peer, A., Buss, M., and Slater, M. (2011). Comparison of people’s responses to real and virtual handshakes within a virtual environment. Brain Res. Bull. 85, 276–282. doi:10.1016/j.brainresbull.2010.11.012
Hilliges, O., Kim, D., Izadi, S., Weiss, M., and Wilson, A. (2012). “HoloDesk: direct 3d interactions with a situated see-through display”, in Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (New York, NY: ACM), 2421–2430.
Hoberman, P., Krum, D. M., Suma, E. A., and Bolas, M. (2012). “Immersive training games for smartphone-based head mounted displays”, in Virtual Reality Short Papers and Posters (VRW), 2012 IEEE (New York, NY: IEEE), 151–152.
Holliman, N. S., Dodgson, N. A., Favalora, G. E., and Pockett, L. (2011). Three-dimensional displays: a review and applications analysis. IEEE Trans. Broadcast. 57, 362–371. doi:10.1109/TBC.2011.2130930
Maimone, A., Bidwell, J., Peng, K., and Fuchs, H. (2012). Enhanced personal autostereoscopic telepresence system using commodity depth cameras. Comput. Graph. 36, 791–807. doi:10.1016/j.cag.2012.04.011
Merabet, L. B., Rizzo, J. F., Amedi, A., Somers, D. C., and Pascual-Leone, A. (2005). What blindness can tell us about seeing again: merging neuroplasticity and neuroprostheses. Nat. Rev. Neurosci. 6, 71–77. doi:10.1038/nrn1586
Olson, J. L., Krum, D. M., Suma, E. A., and Bolas, M. (2011). “A design for a smartphone-based head mounted display,” in Virtual Reality Conference (VR), 2011 IEEE (Singapore: IEEE), 233–234. doi:10.1109/VR.2011.5759484
Peck, T. C., Seinfeld, S., Aglioti, S. M., and Slater, M. (2013). Putting yourself in the skin of a black avatar reduces implicit racial bias. Conscious. Cogn. 22, 779–787. doi:10.1016/j.concog.2013.04.016
Stark, L. W. (1995). “How virtual reality works: illusions of vision in ‘real’ and virtual environments,” in Proceedings of SPIE. Vol. 2411, Human Vision, Visual Processing, and Digital Display VI. eds B. E. Rogowitz, and J. P. Allebach (San Jose, CA: SPIE), 277. doi:10.1117/12.207546
Steed, A., and Julier, S. (2013). “Design and implementation of an immersive virtual reality system based on a smartphone platform,” in 2013 IEEE Symposium on 3D User Interfaces (3DUI) (New York, NY: IEEE), 43–46.
Steed, A., Spante, M., Heldal, I., Axelsson, A.-S., and Schroeder, R. (2003). “Strangers and friends in caves: an exploratory study of collaboration in networked IPT systems for extended periods of time,” in Proceedings of the 2003 Symposium on Interactive 3D graphics (New York, NY: ACM), 51–54.
Tcheang, L., Bülthoff, H. H., and Burgess, N. (2011). Visual influence on path integration in darkness indicates a multimodal representation of large-scale space. Proc. Natl. Acad. Sci. U.S.A. 108, 1152–1157. doi:10.1073/pnas.1011843108
Keywords: virtual environments, virtual reality, haptics, neuroscience, grand challenges
Citation: Slater M (2014) Grand challenges in virtual environments. Front. Robot. AI 1:3. doi: 10.3389/frobt.2014.00003
Received: 06 May 2014; Accepted: 16 May 2014;
Published online: 27 May 2014.
Edited and reviewed by: Daniel Thalmann, Nanyang Technological University, Singapore
Copyright: © 2014 Slater. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.