PERSPECTIVE article

Front. Mar. Sci., 17 October 2019
Sec. Ocean Observation
Volume 6 - 2019 | https://doi.org/10.3389/fmars.2019.00644

Virtual Reality and Oceanography: Overview, Applications, and Perspective

  • 1Graduate School of Oceanography, University of Rhode Island, Narragansett, RI, United States
  • 2Center for Computation and Visualization, Brown University, Providence, RI, United States
  • 3Department of Computer Science, Brown University, Providence, RI, United States
  • 4Ocean Ecology Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, United States
  • 5GESTAR/Universities Space Research Association, Columbia, MD, United States

With the ongoing, exponential increase in ocean data from autonomous platforms, satellites, models, and in particular, the growing field of quantitative imaging, there arises a need for scalable and cost-efficient visualization tools to interpret these large volumes of data. With the recent proliferation of consumer-grade head-mounted displays, the emerging field of virtual reality (VR) has demonstrated its benefit in numerous disciplines, ranging from medicine to archeology. However, these benefits have not received as much attention in the ocean sciences. Here, we summarize some of the ways that virtual reality has been applied to this field. We highlight a few examples in which we (the authors) demonstrate the utility of VR as a tool for ocean scientists. For oceanic datasets that are well-suited for three-dimensional visualization, virtual reality has the potential to enhance the practice of ocean science.

1. Introduction

Virtual Reality (VR) allows a user to immerse herself in a computer-generated environment. The feeling of presence (Slater and Wilbur, 1997) is generated by simulating sensory feedback from the environment in response to the user's actions, allowing the user to coexist and interact with virtual entities in the same three-dimensional space. Various display technologies have evolved to facilitate these experiences. While most systems simulate only visual and auditory feedback, e.g., head-mounted displays (Sutherland, 1968) or room-scale CAVE environments (Cruz-Neira et al., 1992), the feedback can also stimulate other senses, e.g., proprioceptive or haptic sensations. In VR, a user perceives only the computer-generated content while the real world is absent. In contrast, Augmented Reality (AR) and Mixed Reality (MR) overlay the virtual simulation on top of the real world, creating a mixture of real and virtual feedback perceived in a collocated space.

Researchers from a range of scientific disciplines have benefited from the application of virtual reality. At the Brown University Center for Computation and Visualization (CCV), over two decades of interdisciplinary visualization collaborations paved the way for today's state-of-the-art scientific VR applications. While VR visualization takes varying degrees of effort to achieve, the benefits of visualizing scientific data in VR include faster analysis, greater spatial understanding, and new types of exploration (LaViola et al., 2009). Interest in developing VR applications has intensified recently with the arrival of cost-effective, consumer-grade head-mounted displays (HMDs), which have made the benefits of interactive VR-based scientific visualization more widely accessible (Castelvecchi, 2016; Matthews, 2018). With the ongoing, exponential increase in the resolution, coverage, and diversity of oceanographic datasets, there arises the need for scalable and cost-efficient visualization tools to begin to interpret these large volumes and varieties of data (Huang et al., 2015; Liu et al., 2017).

Here, we describe the outcomes and lessons learned from a collaboration between Brown CCV and oceanographers at the University of Rhode Island (URI) Graduate School of Oceanography, which we hope conveys the emerging enthusiasm for VR in ocean science. First, we review a selection of previous work and provide a guide to getting started with VR visualization. Next, we provide more detailed examples of how we applied these technologies in our collaborative work. Finally, we conclude with a discussion of these applications and an outlook on future developments in this field.

2. Previous Work

Virtual reality allows underwater scenes, built from both real (observed) and simulated (modeled) data, to be experienced as immersive visualizations. The application of VR has increased steadily since the 1990s, and we anticipate that this trend will continue, or even accelerate, in the future (Figure 1). In this section, we review a selection of these works to illustrate the impact that VR has realized in ocean science to date. From live VR video feeds to simulated VR environments, and from user-centric to animal-centric applications, VR has demonstrated a growing array of benefits.

Figure 1. The frequency of ocean science-related publications involving virtual reality has increased since the early 1990s. The search criteria for these works were focused on ocean data visualization-related terms, covering both observed and modeled data. Google Scholar keyword searches included: VR, virtual reality, immersive, 3D virtual environment, 3D user interfaces, telepresence, marine, underwater, oceanography, oceans, virtual, technology, head-mounted display, HMD, Oculus Rift, HTC Vive, CAVE. This search was concluded after a total of 150 citations was reached.

Early utilization of VR for ocean exploration focused on remotely operated vehicle (ROV) navigation, and these efforts demonstrated the utility of VR for increasing ROV pilot situational awareness in harsh, low-visibility environments (Hine et al., 1994; Fleischer et al., 1995; Stoker et al., 1995; Lin and Kuo, 1998). More than 25 years later, this concept has seen vast technological refinement with the application of off-the-shelf VR components, including HMDs such as the Oculus Rift and improved haptic devices, which add a greater field of view, faster head tracking, and more intuitive feedback for the remote control of the ROV manipulator arm (Lynch and Ellery, 2014; Candeloro et al., 2015). In addition, the feedback from these haptic control devices (e.g., vibration) can help avoid collisions with expensive equipment (Lynch and Ellery, 2014; Sivčev et al., 2018). These visualizations and controls have also been aided by stereoscopic cameras, which use synchronized cameras to take 3D images and have the advantage of mimicking human binocular vision while also enabling more accurate spatial measurement (Shortis et al., 2007). Underwater exploration via ROV now includes methods for underwater 3D mapping, which use laser scanning (Shigematsu and Moriya, 1997; Massot-Campos et al., 2015) and acoustics (Griffiths et al., 1997; Chapman et al., 1999; Palmese and Trucco, 2008) to explore terrain at even higher resolution. Photogrammetric approaches to the reconstruction of underwater 3D maps are a more recent development and provide a cost-effective, accurate, and reproducible method for re-creating marine habitats (Kwasnitschka et al., 2013; Marre et al., 2019). In support of this growing data capacity, network architecture also continues to improve, with internet-connected ships (Raineault et al., 2018) and ROVs enabling multiple users to coordinate efforts simultaneously with the aid of real-time AR applications (Chouiten et al., 2012).

Onshore, VR has been used to render educational underwater scenes for the benefit of students and the general public, offering interactive access to underwater ocean ecosystems and dynamics via CAVEs and HMDs (Frohlich, 2000; Chen et al., 2012; Jung et al., 2013). Submersible AR and VR applications are a more recent development, and various projects have made use of waterproof hardware to create experiences which combine swimming with animation and actual underwater images (Bellarbi et al., 2013; Oppermann et al., 2016; Costa et al., 2017).

Similarly, VR has enabled novel experiments to study marine megafauna. Rather than using VR to project humans into a simulated environment, captive animals are presented with virtual environments that mimic their natural surroundings in order to trigger behavioral responses, e.g., to elicit camouflage (Jaffe et al., 2011; Josef, 2018; Figure 2) or predator-avoidance responses (Butail et al., 2012; Trivedi and Bollmann, 2013). These simulations demonstrate how VR has enabled researchers to pursue new and creative avenues for studying marine physiology and ecology.

Figure 2. Virtual reality used in a laboratory setting to study light stimulus response in Loligo opalescens. Image courtesy of Jules Jaffe, Scripps Institution of Oceanography.

While not strictly ocean science, data visualization has benefited the natural sciences in general through improved data readability and interpretability, and has enabled the communication of dynamic four-dimensional flows (Lin and Loftin, 1998; Ohno and Kageyama, 2007; Rautenhaus et al., 2017). Four-dimensional flows, such as geophysical models of ocean currents, yield the most complete picture of oceanic processes when visualized in all four dimensions (Nations et al., 1996), and exploration of complex datasets using VR can provide a method for quickly detecting patterns and unseen features (Billen et al., 2008).

3. VR Recipe

Virtual reality requires three main ingredients: a VR display, software capable of displaying VR content, and, of course, the content itself. In the following, we give an overview of the options commonly available and provide guidelines on how to choose among them.

3.1. VR Displays

While VR can generate feedback for all senses, we restrict this overview to visual display devices: auditory devices are usually simple headphones, and devices providing feedback for other modalities, e.g., haptic or olfactory sensations, are not widely used and remain subjects of active research. Visual VR devices can be divided into three categories: mobile phone-based VR, consumer-grade HMDs, and CAVE systems, the last of which can provide feedback for multiple users. Each comes with advantages and disadvantages regarding immersion, availability, and interaction.

Mobile phone-based VR is the most accessible of the technologies currently available. A mobile phone is placed into a VR headset containing two lenses that provide a stereo view; in the simplest case, the headset is made of cardboard, at a cost on the order of several dollars. Immersion in the VR environment is achieved by rendering the scene from the user's point of view, as determined by the phone's internal sensors. However, while current research continues to investigate using the internal camera to determine the user's translational movement, mobile phone VR can currently determine only head rotations, which limits interaction and immersion: a user can look around in the VR environment but cannot move through the scene by real physical motion. A similar limitation exists for interaction with the environment. While some headsets provide a controller, only the controller's rotation is tracked, not its position, which limits interaction to a laser-pointer metaphor. Despite these limitations, the low cost makes the technology extremely valuable for outreach.

Head-mounted displays (HMDs) have seen increased attention in recent years due to major technical developments. While 10 years ago HMDs were extremely expensive and their limited field of view made immersion unsatisfactory, advances in mobile phone screens and cheaper sensors have made consumer-grade devices possible. Several devices are currently available at costs on the order of several hundred dollars, e.g., the HTC Vive, Oculus Rift, or Windows Mixed Reality headsets. As in mobile phone VR, these devices consist of a display behind two lenses in plastic goggles, but they exhibit two major differences. First, whereas mobile phone VR senses only the rotation of the user's head or controllers, HMDs use additional sensors to determine translational motion. This enables a user to move and interact in the virtual space as in the real world: a user can move around an object, grab it, or crouch down to gain novel perspectives, increasing immersion and providing a higher level of fidelity. Second, as the name implies, an HMD is only the display; rendering the virtual scene requires a PC with a high-quality GPU, raising the cost of such a system by one to two thousand dollars. This limits the number of users who can participate at the same time and makes applications usable by a smaller subset of people than mobile phone VR. Nonetheless, their higher fidelity, better immersion, and increased interaction make HMDs best suited for VR applications in a scientific context.

CAVE systems have been widely used at universities and research centers, particularly in the past, when results with HMDs were not satisfactory for VR. In contrast to HMDs and mobile phone-based VR, the displays are not worn but surround the users. The position and orientation of the user and (usually) the controllers are determined using motion-capture systems, and shutter glasses provide a stereoscopic view. This permits rendering the scene on each display as if it were a window to the virtual world, making CAVE systems similar to the well-known Star Trek holodeck. As with HMDs, a user can walk around the simulated space and interact freely with its entities, but the design carries different advantages and disadvantages. In an HMD, a user does not see the real environment, while in a CAVE a user can still see the real surroundings. This not only leads to better acceptance and less cybersickness, but also allows use by multiple users, facilitating discussion and collaboration with peers. It should be noted, however, that most CAVE systems, with a few exceptions (Blom et al., 2002; Fröhlich et al., 2005), track the position of only one user, so the rendering is optimal only for that user and the experience is diminished for the others. As a further drawback, CAVE systems usually require more support for maintenance and software development due to their complexity, while the consumer market has made usability and software development for HMDs easier.

3.2. VR Software

In order to render content on a VR device, a VR-capable application is required. As with the displays, several options exist, depending on the devices used and the fidelity and interaction targeted. The effort to build a VR experience can range from several minutes to several weeks, depending on the tools and the desired interaction. While specialized software exists for certain data types, we give a short overview of three freely available approaches to visualizing content in VR and highlight their advantages and limitations: ParaView, a VR-capable visualization tool for scientific data; Unity3D, a 3D game engine for creating VR applications; and custom software development using traditional programming languages.

ParaView is a visualization application widely used for scientific data that supports VR display on HMDs as well as in CAVE systems. It supports the import and visualization of many different data types and visualization primitives. Good documentation, including many tutorials, is available, and data can usually be loaded within minutes and presented in VR with the click of a button. Interaction with the data in VR is quite limited, however, and mobile phone VR is not supported. Nonetheless, given this ease of use and the wide range of supported data types and visualizations, ParaView is a valuable tool for seeing data in an immersive environment.
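The same load-and-show workflow can also be scripted through ParaView's Python interface (pvpython). The following is a minimal sketch, assuming a hypothetical VTK file name; presenting the result on a VR device is then handled by ParaView's VR support rather than by the script itself.

```python
# Minimal pvpython sketch: load a dataset and display it in the active view.
# "ctd_profiles.vtk" is a hypothetical file name used only for illustration.
from paraview.simple import OpenDataFile, Show, Render

reader = OpenDataFile("ctd_profiles.vtk")  # reader chosen from the file type
Show(reader)    # add the dataset to the active render view
Render()        # draw the scene; VR output is enabled via ParaView's VR support
```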

Unity3D has become the most common design tool for VR applications due to its good support for VR devices and its large user base. It supports many different data types, and the behavior of entities in an application can be defined through game logic. Because of the large user base, many tutorials can be found on YouTube. Additional functionality can be added to an application by downloading packages, including reusable components called "prefabs," from the Unity Asset Store for free or for a small fee. These range from simple three-dimensional models to packages used to plot data, e.g., the Immersive Analytics Toolkit, or packages that simplify interactions in VR, such as grabbing an object, e.g., the Virtual Reality Toolkit. While most of the application design can be done within the Unity3D user interface, some experience with scripting or programming is still recommended. Even so, given the examples and tutorials available, a novice user can develop a simple VR application within a couple of days. As Unity3D is a tool for designing 3D applications, interaction in the virtual environment is customizable, and deployment and distribution of the final application across different VR systems is easy. As a consequence, most recently developed VR applications (especially in the sciences) are built using Unity3D.

Custom software development can be used to build VR applications in a wide range of programming languages, e.g., Python, MATLAB, or C++. However, achieving satisfactory results requires considerably more effort than with Unity3D, and this route is advised only when restrictions of the data or the VR system do not permit the use of Unity3D. Several libraries, e.g., MinVR, can facilitate the development of VR applications, but the process still requires significant knowledge of the programming language and the VR system used.

3.3. VR Content

Finally, the content to be displayed should also be considered when designing the VR experience. If the data are two-dimensional, visualization in a three-dimensional space will provide no significant advantage over traditional methods. If the data have three or more dimensions, however, immersive visualization can provide novel perspectives that lead to scientific insight, and, combined with custom tools designed for interacting with the environment, can support exploratory analysis. It must be taken into account that while some data can be easily incorporated in their raw state, other data may require additional preprocessing to extract higher-dimensional information, or effort to bring different data types into the same reference frame. A VR display of scientific data is ultimately an experience for the user, so the audience and the goal of the visualization deserve attention, as they should have a significant impact on the design of the tools for interacting with the data.

4. Our Approach

In the following section, we detail our applications in order to share lessons learned, to convey caution in some cases, and ideally to inspire other ocean scientists to implement their own VR experiences. The three VR applications developed to date by our group provide only a glimpse of the possibilities offered by this rapidly growing technology, and they also presented a range of challenges. For example, part of our goal was to render the data in the YURT system, which requires special algorithms and is not supported by Unity3D. Consequently, our first two applications were developed as custom C++ software after a first exploration of the data in ParaView; nonetheless, these applications were also used in an HMD setting during field work. The third application was developed using Unity3D for HMDs. Representative 2D movies are provided (see Supplementary Information), but VR is optimally experienced on a VR-enabled device. The source code has been made available, and pre-compiled applications for Windows can be downloaded.

4.1. Data Visualization–Autonomous Platform Tracks and Observations

As the data acquired by autonomous underwater platforms (drifting and powered) are associated with position, depth, and time, these datasets are a natural application for VR. The dynamic ocean environment surrounding these vehicles often results in data records that convolve space and time. Rendering these observations in a 3D (x, y, z) setting allows a user to more easily identify aspects of the record that are likely associated with a spatial feature as opposed to a temporal change.

In our testbed application, we wanted to be able to review data from a free-drifting, wave-powered profiler called a Wirewalker (Rainville and Pinkel, 2001) within its hydrographic context. We used VR to combine Wirewalker sensor data with its geolocated drift paths as well as with corresponding satellite imagery (see Supplementary Movie 1). Because GPS signals do not penetrate to depth, pre-processing steps were necessary to estimate the submerged horizontal coordinates of the Wirewalker. These coordinates were simple, "straight line" approximations between successive surface GPS positions, linearly interpolated in space and time using MATLAB prior to VR rendering (see the sketch below). Submerged physical and chemical variables were presented using a linearly spaced color scale; for future versions of this application, we recommend that the color map be selected according to best practices for the given variable, for improved visual accuracy (Thyng et al., 2016). This spatial and temporal series of vertical profiles of the top 120 m of the water column (Omand et al., 2017) was combined with satellite observations of incident light interacting with particles in the water (also known as ocean color), creating a 50 square kilometer ocean color map (NASA Ocean Biology Processing Group, 2015) along the Wirewalker drift track. Successive Wirewalker profiles were used to create an animation of the drift track, while physical and chemical variables were shown for the whole surrounding region using color scales (Figure 3). Using the VR controllers, users could navigate the virtual seascape by flying through water-column profiles while toggling between the different physical and chemical variables recorded by the Wirewalker. For more detailed analysis, users were also given tools to inspect individual data points with a simple pointing gesture. Although C++ was used for our application, this type of data could be readily viewed with Unity3D, which we highly recommend for those who plan to use HMDs.
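As an illustration of the "straight line" pre-processing described above, the sketch below linearly interpolates submerged sample positions between successive surface GPS fixes. The arrays are hypothetical, and the original processing was done in MATLAB; this Python equivalent is a sketch, not the authors' code.

```python
import numpy as np

# Hypothetical surface GPS fixes: times (s) and positions (degrees).
fix_t = np.array([0.0, 600.0, 1200.0])
fix_lat = np.array([34.100, 34.120, 34.150])
fix_lon = np.array([-120.500, -120.470, -120.450])

# Times (s) of submerged samples that need estimated horizontal coordinates.
sample_t = np.arange(0.0, 1201.0, 30.0)

# "Straight line" estimate: linear interpolation between bracketing fixes.
sample_lat = np.interp(sample_t, fix_t, fix_lat)
sample_lon = np.interp(sample_t, fix_t, fix_lon)
```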

Figure 3. A VR user's view inside the autonomous vehicle data visualization application. Left: Satellite ocean color data (NASA Ocean Biology Processing Group, 2015) are combined with Wirewalker drift tracks and in situ sensor data. This bird's-eye view of the data shows a three-day deployment in the North Pacific. Right: Successive, high-resolution profiles of the top 120 m of the water column are combined to create animations of the vehicle's 3D position throughout the course of the deployment. Each real-world vertical profile (10–20 min per profile) is replayed in VR at 2 s per profile. Users can toggle between CTD variables, fly through the scene, and change the spatial scaling in the vertical dimension with the Oculus Touch controllers.

4.2. Data Visualization—Holographic Microscopy

In our second application, we targeted the visualization of holographic microscopy data. As the holographic microscope images 3D volumes in a single camera frame, rapidly and without the use of mechanical lenses (Beers et al., 1970; Jericho et al., 2013), its data seemed well-suited to VR applications at first glance. However, the technology only allows users to refocus the microscopic image at different distances from the instrument within the 3D volume (Figure 4); to visualize the whole volume in an immersive environment, the particles recorded in the hologram first needed to be detected, segmented, and extracted.

Figure 4. The custom hologram processing pipeline extracts 2D contours from the imaged 3D volume. (a) A raw 2D hologram. (b) A refocused hologram image at 16,250 μm from the laser source, revealing a Thalassionema-type chain-forming diatom. (c) Regions of interest (colored and sorted) derived from the image processing pipeline. (d) An assortment of re-focused hologram contours illustrates the variety of marine particle types imaged by the holographic microscope, including diatoms, detritus, and zooplankton.

We therefore developed a custom hologram processing pipeline that first computes a sharpness score for each pixel across all image planes in the volume and stores, for each pixel, the maximum value (Guildenbecher et al., 2012; Ihan et al., 2014); the key steps are sketched below. As neighboring in-focus pixels are likely to belong to the same object, the pixels are grouped into segments in a second step. For each segment, the optimal focus distance is computed from the same sharpness score as in the first step, but over the whole region (see Figure 4c). Finally, the image is refocused for each segment at its optimal distance, and the particle is segmented using the GrabCut algorithm (Rother et al., 2004). This yields a focused 2D representation of each particle (see Figure 4d) as well as its three-dimensional position within the volume. These 2D silhouettes, combined with their 3D positions in the microscope imaging volume, are well-suited for visualization in VR.
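A rough Python sketch of these steps follows: a per-pixel focus measure evaluated across refocused planes, a max-projection that retains the best plane index, and GrabCut segmentation of a detected region. The Laplacian focus measure and all names here are illustrative assumptions; the authors' implementation is custom C++ with its own sharpness score.

```python
import numpy as np
import cv2

def sharpness(plane):
    # Magnitude of the Laplacian as a simple per-pixel focus measure.
    return np.abs(cv2.Laplacian(plane.astype(np.float32), cv2.CV_32F))

def best_focus(planes):
    """planes: (n_planes, H, W) stack of refocused grayscale images.
    Returns per-pixel max sharpness and the plane index achieving it."""
    scores = np.stack([sharpness(p) for p in planes])
    return scores.max(axis=0), scores.argmax(axis=0)

def segment_particle(plane, rect):
    """Extract a particle silhouette from its best-focus plane with GrabCut.
    plane: 8-bit grayscale image; rect: (x, y, w, h) box from the grouping step."""
    img = cv2.cvtColor(plane, cv2.COLOR_GRAY2BGR)
    mask = np.zeros(plane.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)
```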

For our application, we used a digital inline holographic microscope, which allows a wide range of marine particle size classes (5–1,000 μm) to be imaged in situ, preserving their delicate, undisturbed forms and morphologies. To showcase the use of VR in combination with the data acquired by this instrument, we developed three different VR scenarios.

4.2.1. Phytoplankton Trophy Room

In the Trophy Room, a collection of particles from several holographic images is combined and displayed in a single volume (see Supplementary Movie 2). Visualizing different images in the same space removes information about the objects' spatial relationships to each other, but it allows a user to compare sizes and shapes across the different datasets. The Trophy Room is also well-suited to communicating the abundance and variety of the phytoplankton world to scientists and novices alike in an engaging and immersive manner.

4.2.2. Phytoplankton Safari

As the holographic microscope can operate autonomously, we mounted it on the ship's CTD during a five-week cruise aboard the R/V Falkor. This permitted us to record vertical holographic microscope image profiles of the North Pacific alongside the standard suite of physical, chemical, and biological variables (see Supplementary Movie 3). In the "Safari," successive holograms from a CTD cast were "stacked" on top of each other (a coordinate transform sketched below), providing a phytoplankton's view from a descending CTD rosette down to a maximum depth of 2,000 m. The VR controllers allowed the user to "fly" through the CTD cast. Functionality was added for tagging interesting objects such as phytoplankton and marine snow, viewing the "ambient" CTD variables, and measuring spatial distances between interesting hologram features (Figure 5).
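The "stacking" step amounts to a simple coordinate transform: each hologram's local particle positions are offset by the rosette depth recorded at capture time. The sketch below illustrates this under assumed units and data structures; the names are hypothetical.

```python
def stack_particles(holograms):
    """Place particles from successive holograms into one water-column frame.

    holograms: list of dicts with 'depth_m' (CTD depth when the frame was
    captured) and 'particles' (a list of (x_um, y_um, z_um) positions in the
    instrument's sample volume). Returns (x_m, y_m, z_m) tuples, depth down.
    """
    stacked = []
    for h in holograms:
        for x_um, y_um, z_um in h["particles"]:
            # Micrometers to meters; offset local z by the cast depth.
            stacked.append((x_um * 1e-6, y_um * 1e-6,
                            h["depth_m"] + z_um * 1e-6))
    return stacked
```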

Figure 5. Holographic microscope data were processed and rendered in VR at sea. (Left) A virtual reality "Holodeck" was set up inside the CTD control room of the R/V Falkor for viewing holographic microscope data that had been mapped to CTD profiles. Image credit: Schmidt Ocean Institute/Monika Naranjo Gonzalez. Written informed consent was obtained for the publication of this image. (Right) A user's view inside the VR visualization of the holographic microscope data shows how easily interesting objects can be tagged and lengths measured using the Oculus Touch controllers.

4.2.3. Phytoplankton Locomotion

As the holographic microscope is also able to capture a 3D volume at a rate of 16 fps, we developed a holographic movie player (see Supplementary Movie 4). Recording movies in the field is not practical, as chaotic flow moves plankton in and out of the volume too quickly, so we instead recorded a holographic movie of a swimming Akashiwo sanguinea in a more quiescent laboratory experiment. In addition to the navigation available in the previous examples, a user can operate traditional movie controls such as fast-forward, rewind, and pause, allowing the motion of the particles to be followed not only in the two-dimensional image planes but also through the three-dimensional volume.

4.3. Education—An Interactive Plankton Zoo

In response to the enthusiasm for the data visualization applications at outreach events, from researchers as well as novices, we decided to develop an educational experience that engages younger audiences in learning about phytoplankton. This led us to create an interactive plankton zoo (Figure 6; see Supplementary Movie 5). We found that the ease of creating animated, underwater virtual scenes was greatly increased by the popular gaming engine Unity3D together with the Virtual Reality Toolkit. We used this software suite to integrate 3D plankton models (previously prepared for an outreach video) into an underwater scene, providing users with a novel way to interact with the diversity of phytoplankton types. Participants could use the VR controllers to grab 3D models of floating plankton, read short descriptions of each organism (adapted from PACE Phytopia), and experience these morphologies up close.

Figure 6. Five different plankton types and morphologies were rendered in the application, providing users with a novel way to collect and learn about them. (Left) A user's view inside the Oculus Rift head-mounted display while capturing a floating phytoplankton. (Right) A user's view inside the Oculus Rift head-mounted display while holding a "3D Chaetoceros" in the Plankton Zoo VR application.

5. Discussion

We successfully tested these VR applications in a CAVE and on HMDs and gained new perspectives on the potential for VR in our future work. In this section, we explore the insights gained and weigh the merits of the invested effort against the results. In extrapolating to future states of this technology, we consider the types of data well-suited for VR, the potential benefits of this novel data interaction style, the value of having access to this immersive data exploration style in the field, the new possibilities for remote collaboration, and, finally, the impact on communication and education.

Our experience rendering different data types in VR, whether from autonomous platforms or the holographic microscope, was that varying degrees of effort are required to achieve an effective visualization. Autonomous vehicle, CTD, and bathymetric datasets are readily accessible for viewing in VR with minimal processing. Furthermore, toggling between and layering together a diverse set of chemical and physical variables from a range of different sensors required minimal data manipulation and time-stamp synchronization (a sketch of such synchronization follows below). Datasets such as these are well-suited for VR. In contrast, holographic image data require much greater effort to prepare for VR visualization because the positions of the in-focus objects are unknown prior to the pre-processing steps. Intensive pre-processing was required to visualize regions of interest, and a custom C++ application was created for the final rendering and interaction tools in VR.
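Time-stamp synchronization of this sort can often be reduced to a nearest-in-time join. The sketch below shows one way to do this with pandas; the column names and tolerance are hypothetical.

```python
import pandas as pd

ctd = pd.DataFrame({
    "time": pd.to_datetime(["2017-02-01 00:00:00", "2017-02-01 00:00:10"]),
    "temp_C": [18.2, 18.1],
})
optics = pd.DataFrame({
    "time": pd.to_datetime(["2017-02-01 00:00:03", "2017-02-01 00:00:12"]),
    "chl_mg_m3": [0.42, 0.44],
})

# Both inputs must be sorted by time; match each optics sample to the
# nearest CTD record within a 5-second tolerance.
merged = pd.merge_asof(optics, ctd, on="time",
                       direction="nearest", tolerance=pd.Timedelta("5s"))
```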

There are many ongoing discussions about data standards in oceanography and the wider marine science community, especially with respect to open-source software, cloud data storage, and cloud computing. With regard to VR and data standards, porting some of the standard data APIs (netCDF, HDF, etc.) to VR-friendly environments would improve the workflow from station and time-series data to VR rendering (one possible bridge is sketched below). Although this is not within the scope of this paper, ongoing and future discussions will have to take into consideration the presence and potential impact of VR in facilitating more widespread utilization of public data for numerous applications, including furthering our understanding of complex interconnections within the Earth system as a whole.
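As one hedged illustration of such a bridge, the sketch below reads a gridded variable through the standard netCDF API (via xarray) and flattens it into point records that a VR engine could ingest. The file name, variable name, and dimension order are assumptions for illustration.

```python
import numpy as np
import xarray as xr

ds = xr.open_dataset("ocean_temperature.nc")   # hypothetical file
temp = ds["temperature"]                       # assumed dims: (depth, lat, lon)

# Build matching 3D coordinate grids, then flatten to (lon, lat, z, value).
depth, lat, lon = np.meshgrid(temp["depth"].values, temp["lat"].values,
                              temp["lon"].values, indexing="ij")
points = np.column_stack([lon.ravel(), lat.ravel(),
                          -depth.ravel(),       # depth down -> negative z
                          temp.values.ravel()])
np.save("vr_points.npy", points)               # to be loaded by the VR app
```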

The interaction style within each of the applications provides a glimpse into how VR-enabled problem-solving environments can aid discovery in ocean science. During demonstrations at the 2017 American Geophysical Union Ocean Sciences Meeting and at the University of Rhode Island, conversations with other marine scientists led to improved spatial interpolation of the Wirewalker data. Immersive VR exploration of the Wirewalker data provided a more interconnected view of water mass properties through space and time, as we were able to fly through the semi-Lagrangian drift tracks and begin to speculate about the presence of persistent water mass features. In the future, we envision that additional parameter visualizations, including layered visualizations of model output and objective mapping algorithms, may enhance the ability to identify these water mass properties.

While the autonomous platform visualization enabled a more synoptic, macroscale view of the data, the holographic microscope visualization brought us one step closer to the microscale perspective of the plankton. The power to change the camera angle with a tilt of the head lets the user interact with the virtual plankton as if they were actually floating there in the real world, measuring distances and tagging interesting features for rapid, intuitive exploration. This interaction style minimized the bias in apparent spatial distances within a hologram compared with 2D renderings, as the VR visualization engine accurately re-scaled object sizes according to their distance from the user's virtual position. This feature is crucial for point-source holographic microscope images, in which particles appear more magnified the farther from the camera they are captured.

In contrast to the holographic microscope visualization, spatial distortion was a necessary feature of the autonomous platform visualization. Vertical lengths were scaled up for improved readability, as the 120 m vertical profiles of the water column were small compared with the tens of kilometers the drifter traveled over several days. This non-uniform axis scaling made it easier to see vertical structure in the water column, while uniform axis scaling gave a better sense of how depth corresponds to the overall scale of the Wirewalker drift track. The ability to manually change this aspect ratio (sketched below) provided a valuable demonstration of scale that is not readily rendered in 2D print graphics.
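The adjustable aspect ratio reduces to a per-axis scale factor applied to positions before rendering; a toy sketch with hypothetical names:

```python
import numpy as np

def apply_vertical_exaggeration(points_xyz, factor):
    """points_xyz: (N, 3) array of (east_m, north_m, depth_m) positions.
    Returns a copy with only the vertical axis stretched by `factor`."""
    scaled = np.array(points_xyz, dtype=float)
    scaled[:, 2] *= factor   # stretch depth; horizontal axes unchanged
    return scaled
```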

The relatively compact, portable nature of HMDs makes this technology well-suited to take into the field for fast exploration of datasets. With as little as nine square feet of space, a VR system could be set up in the main lab or mission-planning area of a research vessel for on-the-fly decision making. Data quality could be monitored as data are collected, potentially leading to adjustments in the cruise plan or sensor deployment configurations. Our experience aboard the R/V Falkor provided a glimpse into this future, as the HTC Vive HMD was set up adjacent to the ship's CTD monitoring station. While reviewing holographic microscope profiles just hours after they were recorded, we began to consider the practicality of VR-enabled water-column sampling in which prominent features could be rendered alongside recent CTD casts. This integrated view of the water column might improve interdisciplinary collaboration, as multiple viewpoints could work to identify complementary features.

VR has the potential to transform collaboration at sea and onshore into co-located yet remote experiences. All stakeholders could access the same virtual environment, which may aid mission planning, task delegation, and policy making. At the University of Rhode Island's Inner Space Center, telepresence has become central to remote ocean exploration. We imagine multiple users having the ability to meet in VR, which could facilitate more meaningful discussion and analysis, with participants viewing minimally curated, minimally biased data. For example, the application ConfocalVR allows multiple users to interact in a virtual space, which has been shown to be beneficial for understanding cellular structure (Stefani et al., 2018). VR-enabled collaboration could also be used for policy making, as has been done for coastal management and planning, where stakeholders used such visualizations to assess the potential outcomes of marine conservation and sustainability projects (Newell et al., 2017). While viewing holographic microscope and Wirewalker data in the Brown CCV YURT, multiple users had access to the same 3D visualization simultaneously without the use of an HMD (see Supplementary Movie 6), and this led to lively conversation and interaction. For students and researchers, VR could enable deeper multi-institution collaboration as well as richer educational experiences.

We found VR to be an engaging educational tool, particularly for younger audiences who were excited and curious to experience the new technology. During outreach events at the Waikiki Aquarium, Brown University, and the University of Rhode Island, we found engagement to be lively and feedback to be positive. Underwater environments are a natural fit for VR, as these experiences are often impossible to have in any other way. For example, conveying a sense of the concentration and relative sizes of phytoplankton inside a drop of seawater, what it would be like to walk on the seafloor, or zooming from the 1 km scale of a vehicle dive up to the 100 km scale of the vehicle path are experiences that only VR can provide. Science education VR apps have become more popular recently (Merchant et al., 2014), and these experiences enable audiences to get closer to the actual data: they have a more personal experience with it as they control the camera angle and play in an open-ended, less constrained way. With the possibility of reaching even broader audiences through online VR app stores such as Steam, we see high potential to recruit the next generation of ocean scientists using VR animations and data visualizations.

With reduced technical barriers to developing software packages, virtual reality is being increasingly applied in ocean science as a tool for scientific exploration, discovery, and education. While mainstream adoption of VR in ocean science is yet to be realized, early adopters will be rewarded by the simple joy of developing and sharing these tools. Virtual reality provides a less curated experience than two-dimensional data visualization, allowing users to interact with and interpret data in a manner that is less constrained by the author's perspective, influence, or bias. Although still in the early stages of development, our group's experience with applying VR in ocean science was productive in terms of education, outreach, and exploration. We are hopeful that VR will inspire new, unexpected, and serendipitous observations in ocean science and help bridge the gap between marine observation and data analysis. We have made several of the applications discussed herein available for download and encourage readers to experience the potential of VR for themselves.

Author Contributions

BK and TS created the hologram processing pipeline and visualization in the Brown CCV YURT. BK created the head-mounted display VR apps and the VR educational demonstration. NW, BK, and MO recorded the hologram, CTD, and Wirewalker data. IC helped with data collection and the concept of the VR educational demonstration. NW and BK drafted the paper with contributions from MO.

Funding

This work was funded by the Rhode Island Science and Technology Advisory Council (Award ID AWD05225), with student support provided through NSF (Award IDs AWD05524 and AWD05643) and NASA's PACE mission. For work performed aboard the R/V Falkor, the authors are grateful to the Schmidt Ocean Institute for ship time funding, as well as to the ship's crew for their enthusiasm and support.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmars.2019.00644/full#supplementary-material

Supplementary Movies 1–6. Screen capture movies of the VR applications developed by the authors illustrate several uses of VR in marine science.

References

Beers, J. R., Knox, C., and Strickland, J. D. H. (1970). A permanent record of plankton samples using holography. Limnol. Oceanogr. 15, 967–970.

Bellarbi, A., Domingues, C., Otmane, S., Benbelkacem, S., and Dinis, A. (2013). “Augmented reality for underwater activities with the use of the DOLPHYN,” in 2013 10th IEEE International Conference on Networking, Sensing and Control (ICNSC) (Evry), 409–412.

Billen, M. I., Kreylos, O., Hamann, B., Jadamec, M. A., Kellogg, L. H., Staadt, O., et al. (2008). A geoscience perspective on immersive 3d gridded data visualization. Comput. Geosci. 34, 1056–1072. doi: 10.1016/j.cageo.2007.11.009

Blom, K., Lindahl, G., and Cruz-Neira, C. (2002). “Multiple active viewers in projection-based immersive environments,” Conference: Immersive Projection Technology Workshop (Ames, IA), 6.

Butail, S., Chicoli, A., and Paley, D. A. (2012). “Putting the fish in the fish tank: Immersive VR for animal behavior experiments,” in 2012 IEEE International Conference on Robotics and Automation (St Paul, MN: IEEE), 5018–5023.

Candeloro, M., Valle, E., Miyazaki, M. R., Skjetne, R., Ludvigsen, M., and Sørensen, A. J. (2015). "HMD as a new tool for telepresence in underwater operations and closed-loop control of ROVs," in OCEANS 2015–MTS (Washington, DC: IEEE), 1–8.

Castelvecchi, D. (2016). Low-cost headsets boost virtual reality's lab appeal. Nat. News 533:153. doi: 10.1038/533153a

Chapman, P., Wills, D., Brookes, G., and Stevens, P. (1999). Visualizing underwater environments using multifrequency sonar. IEEE Comput. Graph. Appl. 19, 61–65.

Chen, G., Li, B., Tian, F., Ji, P., and Li, W. (2012). Design and implementation of a 3d ocean virtual reality and visualization engine. J. Ocean Univ. China 11, 481–487. doi: 10.1007/s11802-012-2112-6

Chouiten, M., Domingues, C., Didier, J.-Y., Otmane, S., and Mallem, M. (2012). “Distributed mixed reality for remote underwater telerobotics exploration,” in Proceedings of the 2012 Virtual Reality International Conference, VRIC '12 (New York, NY: ACM), 1:1–16.

Costa, R., Guo, R., and Quarles, J. (2017). “Towards usable underwater virtual reality systems,” in Virtual Reality (VR), 2017 IEEE (San Antonio, TX).

Cruz-Neira, C., Sandin, D. J., DeFanti, T. A., Kenyon, R. V., and Hart, J. C. (1992). The CAVE: audio visual experience automatic virtual environment. Commun. ACM 35, 64–72. doi: 10.1145/129888.129892

Fleischer, S. D., Rock, S. M., and Lee, M. J. (1995). “Underwater vehicle control from a virtual environment interface,” in Proceedings of the 1995 Symposium on Interactive 3D Graphics, I3D '95 (New York, NY: ACM), 25.

Fröhlich, B., Blach, R., Stefani, O., Hochstrate, J., Bues, M., Hoffmann, J., et al. (2005). “Implementing multi-viewer stereo displays,” in WSCG 2005 Conference Proceedings (Weimar), 8.

Frohlich, T. (2000). The virtual oceanarium. Commun. ACM 43, 94–94. doi: 10.1145/341852.341868

Griffiths, H. D., Rafik, T. A., Meng, Z., Cowan, C. F. N., Shafeeu, H., and Anthony, D. K. (1997). Interferometric synthetic aperture sonar for high resolution 3-D mapping of the seabed. IEE Proc. Radar Sonar Navig. 144, 96–103. doi: 10.1049/ip-rsn:19971076

Guildenbecher, D. R., Gao, J., Reu, P. L., and Chen, J. (2012). “Digital holography reconstruction algorithms to estimate the morphology and depth of nonspherical absorbing particles,” in Interferometry XVI: Techniques and Analysis, Vol. 8493 (Albuquerque, NM: International Society for Optics and Photonics), 849303.

Hine, B. P., Stoker, C., Sims, M., Rasmussen, D., Fong, T. W., Steele, J., et al. (1994). “The application of telepresence and virtual reality to subsea exploration,” in Conference Paper, The 2nd Workshop on Mobile Robots for Subsea Environments, Proc. ROV'94 (Mountain View, CA), 10.

Huang, D., Zhao, D., Wei, L., Wang, Z., and Du, Y. (2015). Modeling and analysis in marine big data: advances and challenges. Math. Probl. Eng. 2015, 1–13. doi: 10.1155/2015/384742

Ihan, H. A., Doğar, M., and Özcan, M. (2014). Digital holographic microscopy and focusing methods based on image sharpness. J. Microsc. 255, 138–149. doi: 10.1111/jmi.12144

Jaffe, J. S., Laxton, B., and Zylinski, S. (2011). “The sub sea holodeck: a 14-megapixel immersive virtual environment for studying cephalopod camouflage behavior,” in OCEANS 2011 IEEE–Spain (Santander: IEEE), 1–6.

Jericho, S. K., Jericho, M. H., and Kreuzer, H. J. (2013). “Holographic microscopy of marine organisms,” in Imaging Marine Life (Halifax, NS: Wiley-Blackwell), 48–66.

Josef, N. (2018). Cephalopod experimental projected habitat (CEPH): virtual reality for underwater organisms. Front. Mar. Sci. 5:73. doi: 10.3389/fmars.2018.00073

Jung, S., Choi, Y.-S., Choi, J.-S., Koo, B.-K., and Lee, W. H. (2013). “Immersive virtual aquarium with real-walking navigation,” in Proceedings of the 12th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry (Daejeon: ACM), 291–294.

Kwasnitschka, T., Hansteen, T. H., Devey, C. W., and Kutterolf, S. (2013). Doing fieldwork on the seafloor: photogrammetric techniques to yield 3d visual models from ROV video. Comput. Geosci. 52, 218–226. doi: 10.1016/j.cageo.2012.10.008

LaViola, J. J., Prabhat, Forsberg, A. S., Laidlaw, D. H., and Dam, A. V. (2009). “Virtual reality-based interactive scientific visualization environments,” in Trends in Interactive Visualization: State-of-the-Art Survey, Advanced Information and Knowledge Processing, eds R. Liere, T. Adriaansen, and E. Zudilova-Seinstra (London: Springer London), 225–250.

Lin, C.-R., and Loftin, R. B. (1998). “Application of virtual reality in the interpretation of geoscience data,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST '98 (New York, NY: ACM), 187–194.

Lin, Q., and Kuo, C. (1998). A virtual environment-based system for the navigation of underwater robots. Vir. Real. 3, 267–277.

Liu, Y., Qiu, M., Liu, C., and Guo, Z. (2017). Big data challenges in ocean observation: a survey. Pers. Ubiquit. Comput. 21, 55–65. doi: 10.1007/s00779-016-0980-2

Lynch, B., and Ellery, A. (2014). Efficient control of an AUV-manipulator system: an application for the exploration of Europa. IEEE J. Ocean. Eng. 39, 552–570. doi: 10.1109/JOE.2013.2271390

Marre, G., Holon, F., Luque, S., Boissery, P., and Deter, J. (2019). Monitoring marine habitats with photogrammetry: a cost-effective, accurate, precise and high-resolution reconstruction method. Front. Mar. Sci. 6:276. doi: 10.3389/fmars.2019.00276

Massot-Campos, M., and Oliver-Codina, G. (2015). Optical sensors and methods for underwater 3D reconstruction. Sensors 15, 31525–31557. doi: 10.3390/s151229864

Matthews, D. (2018). Virtual-reality applications give science a new dimension. Nature 557:127. doi: 10.1038/d41586-018-04997-2

Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney-Kennicutt, W., and Davis, T. J. (2014). Effectiveness of virtual reality-based instruction on students' learning outcomes in K-12 and higher education: a meta-analysis. Comput. Educ. 70, 29–40. doi: 10.1016/j.compedu.2013.07.033

NASA Ocean Biology Processing Group (2015). SeaWiFS Level 3 Binned Particulate Organic Carbon Data Version 2014. Greenbelt, MD: NASA Goddard Space Flight Center, Ocean Ecology Laboratory.

Nations, S., Moorhead, R., Gaither, K., Aukstakalnis, S., Vickery, R., Couvillion, W., et al. (1996). “Interactive visualization of ocean circulation models,” in Proceedings of Seventh Annual IEEE Visualization '96 (San Francisco, CA: ACM), 429–432.

Newell, R., Canessa, R., and Sharma, T. (2017). Visualizing our options for coastal places: exploring realistic immersive geovisualizations as tools for inclusive approaches to coastal planning and management. Front. Mar. Sci. 4:290. doi: 10.3389/fmars.2017.00290

Ohno, N., and Kageyama, A. (2007). Scientific visualization of geophysical simulation data by the CAVE VR system with volume rendering. Phys. Earth Planet. Interiors 163, 305–311. doi: 10.1016/j.pepi.2007.02.013

Omand, M. M., Cetinić, I., and Lucas, A. J. (2017). Using bio-optics to reveal phytoplankton physiology from a wirewalker autonomous platform. Oceanography 30, 128–131. doi: 10.5670/oceanog.2017.233

Oppermann, L., Blum, L., and Shekow, M. (2016). “Playing on AREEF: evaluation of an underwater augmented reality game for kids,” in Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '16 (New York, NY: ACM), 330–340.

Palmese, M., and Trucco, A. (2008). From 3-D sonar images to augmented reality models for objects buried on the seafloor. IEEE Trans. Instr. Meas. 57, 820–828. doi: 10.1109/TIM.2007.913703

Raineault, N. A., Bell, K. L. C., and Girguis, P. (2018). Advancing ocean science and exploration through telepresence. Deep Sea Res. II Top. Stud. Oceanogr. 150, 1–3. doi: 10.1016/j.dsr2.2018.05.008

Rainville, L., and Pinkel, R. (2001). Wirewalker: an autonomous wave-powered vertical profiler. J. Atmos. Ocean. Technol. 18, 1048–1051. doi: 10.1175/1520-0426(2001)018<1048:WAAWPV>2.0.CO;2

Rautenhaus, M., Kirby, R. M., and Mirzargar, M. (2017). Visualization in meteorology–a survey of techniques and tools for data analysis tasks. IEEE Trans. Vis. Comput. Graph. 24, 3268–3296. doi: 10.1109/TVCG.2017.2779501

Rother, C., Kolmogorov, V., and Blake, A. (2004). ““GrabCut”: interactive foreground extraction using iterated graph cuts,” in ACM SIGGRAPH 2004 Papers, SIGGRAPH '04 (New York, NY: ACM), 309–314.

Shigematsu, B., and Moriya, N. (1997). Development of a deep-water topological survey system using a laser scanner with a GPS. J. Jpn. Soc. Photogramm. Rem. Sens. 36, 24–34. doi: 10.4287/jsprs.36.5_24

Shortis, M., Harvey, E., and Seager, J. (2007). “A review of the status and trends in underwater videometric measurement,” Invited Paper, SPIE Conference, Vol. 6491 (Parkville, VIC), 1–26.

Sivčev, S., Coleman, J., Omerdić, E., Dooly, G., and Toal, D. (2018). Underwater manipulators: a review. Ocean Eng. 163, 431–450. doi: 10.1016/j.oceaneng.2018.06.018

Slater, M., and Wilbur, S. (1997). A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments. Presence Teleoper. Virt. Environ. 6, 603–616. doi: 10.1162/pres.1997.6.6.603

Stefani, C., Lacy-Hulbert, A., and Skillman, T. (2018). ConfocalVR: immersive visualization for confocal microscopy. J. Mol. Biol. 430, 4028–4035. doi: 10.1016/j.jmb.2018.06.035

Stoker, C. R., Burch, D. R., Hine, B. P., and Barry, J. (1995). Antarctic undersea exploration using a robotic submarine with a telepresence user interface. IEEE Exp. 10, 14–23. doi: 10.1109/64.483008

Sutherland, I. E. (1968). “A head-mounted three dimensional display,” in Proceedings of the December 9–11, 1968, Fall Joint Computer Conference, Part I, AFIPS '68 (Fall, Part I) (New York, NY: ACM), 757–764.

Thyng, K. M., Greene, C. A., Hetland, R. D., Zimmerle, H. M., and DiMarco, S. F. (2016). True colors of oceanography: guidelines for effective and accurate colormap selection. Oceanography 29, 9–13. doi: 10.5670/oceanog.2016.66

Trivedi, C. A., and Bollmann, J. H. (2013). Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture. Front. Neural Circuits 7:86. doi: 10.3389/fncir.2013.00086

Keywords: virtual reality, VR, oceanography, data visualization, digital holographic microscopy

Citation: Walcutt NL, Knörlein B, Sgouros T, Cetinić I and Omand MM (2019) Virtual Reality and Oceanography: Overview, Applications, and Perspective. Front. Mar. Sci. 6:644. doi: 10.3389/fmars.2019.00644

Received: 15 November 2018; Accepted: 01 October 2019;
Published: 17 October 2019.

Edited by:

Eric Delory, Oceanic Platform of the Canary Islands, Spain

Reviewed by:

Martin Pratt, Washington University in St. Louis, United States
Benoît Pirenne, Ocean Networks Canada, Canada

Copyright © 2019 Walcutt, Knörlein, Sgouros, Cetinić and Omand. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Melissa M. Omand, momand@uri.edu
