1. Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA, USA
2. Eunate Technology S.L., Sopela, Spain
3. Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
4. Department of Psychiatry, Massachusetts General Hospital, Boston, MA, USA
5. Harvard Medical School, Boston, MA, USA
6. Isomics Inc., Cambridge, MA, USA
7. Surgical Planning Laboratory, Brigham and Women's Hospital, Boston, MA, USA
8. Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
9. Laboratory of Computer Science, Massachusetts General Hospital, Boston, MA, USA
10. Department of Radiology, Boston Children's Hospital, Boston, MA, USA
Diagnosis of complex medical disorders, as well as imaging research, can benefit from cooperative visualization and analysis of the same image volume by more than one physician or researcher at the same time, in a session that shares control and events between all parties. The viewing parties are often not at the same physical location but are connected via a data network, and their geographical separation can span different cities or even different countries. These location constraints, together with the need for real-time interactions on the image data between participants, call for the development of so-called collaborative image visualization systems (CIVS). In medical diagnosis, these systems form an important sub-area of telemedicine (Manssour and Dal Sasso Freitas, 2000). A CIVS can be seen as a distributed software system that provides simultaneous visualization of shared image data and automatic synchronization of user-data interactions among users working on physically remote computational entities (desktop computers, mobile devices, etc.). This synchronization is difficult to achieve in real time as it usually has to be carried out over the Internet without assuming any specific network topology, latency, or predefined number of connected users. Furthermore, user accessibility and visualization synchronization are both affected by the fact that the remote computational entities can have quite different hardware architectures and operating system platforms. In addition, CIVS are expected to provide a user-friendly interface that is homogeneous across platforms and to require minimal technological skill from users for installation and usage.
Several attempts to implement CIVS have been reported over the last two decades. Early solutions ran mainly on UNIX platforms because of their built-in network and security features. These remote visualization instances were interconnected using middleware technologies for distributed systems such as the Common Object Request Broker Architecture (CORBA) or Remote Method Invocation (RMI) (Anupam et al., 1994; Forslund et al., 1996, 1998; Coleman et al., 1997). Cross-platform solutions began to appear around the turn of the century, using Java to implement the client-side software (Manssour and Dal Sasso Freitas, 2000). However, these technologies lack the desirable loose coupling between clients and servers and provide an unnecessarily complex application programming interface (API), among other technical and cost issues (Gokhale et al., 2002; Henning, 2006).
Recently, some web-based CIVS have been implemented that attempt to achieve real-time synchronization by rendering the image volumes on the server side and sending a representation of the visualization to each collaborator's web browser as a series of 2D images or as streaming video (mainly JPEG, PNG, or MPEG formats) (Kaspar et al., 2010, 2013). However, this server-side rendering technique is not suitable for so-called Fully-Shared-Control real-time CIVS, in which all collaborators have control over the parameters associated with a given interactive visualization (e.g., window leveling of the currently rendered image volume slice) (Manssour and Dal Sasso Freitas, 2000). The main reason is that it requires continuously sending relatively heavy data over the network after every single user-data interaction that modifies the visualization parameters. This makes the application not only highly sensitive to user-specific network latency but also unable to scale well as the number of concurrent users increases. Therefore, a distributed client-side rendering approach is preferable for fast, real-time, all-users interactivity.
2. Materials and Methods
In addition to image rendering, MedView provides for real-time collaboration and sharing of a common image cursor between all participants in a collaborative session.
2.2. Client-Side Rendering and Visualization
From an application programming perspective, an application like MedView is rather lightweight: most of the visualization logic and behavior is provided by the viewerjs library. This library in turn relies on several subcomponents: a low-level visualization component (XTK), a collaboration component (gcjs), and a unified file management system (fmjs); see Figure 1.
Figure 1. The main logical components of MedView. The viewerjs library provides most of the services that an application such as MedView might require. Multiple viewers can quickly be constructed on top of viewerjs (for example, a FreeSurfer surface viewer, a tractography viewer, etc.). Internally, viewerjs uses low-level graphical libraries (XTK and AMI), a real-time collaboration library (gcjs), and a file management library (fmjs). Note that the box colors are for ease of illustration only; similarly colored boxes are not functionally related.
The viewerjs library exposes a viewerjs.Viewer class, which provides methods for easily embedding a neuroimage visualization object (VObj) within an HTML page. The viewerjs.Viewer constructor requires as input only the Document Object Model (DOM) identifier of the HTML element into which the resultant VObj's HTML interface is inserted. The following code shows the simplicity of the call:
var view = new viewerjs.Viewer(divId);
The VObj can asynchronously load more than one neuroimage volume, specified by the imgFileArr variable passed to the addData method. The imgFileArr is an array of custom file objects, where each entry has the following properties:
• url: String representing the file's URL/local path (required)
• file: HTML5 File object (optional but necessary when the files are sourced through a local file-picker or drop-zone)
• cloudId: String representing the file cloud identifier (optional but necessary when the files are sourced from a cloud storage service such as Google Drive)
Using the fmjs library, the VObj can load image data from diverse sources: a remote service via the provided url, the local filesystem via the file property, or the Google Drive storage service via the cloudId property. More data can be added to the viewer by repeatedly calling the addData method, which creates a new thumbnail bar for each dataset; users can also interactively add more data by dragging files or folders onto the viewer, where each drag-and-drop event creates a new floating thumbnail bar. A usage sketch follows.
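The following sketch illustrates an addData call as described above. The file paths and the cloud identifier are hypothetical, and pickedFile stands in for an HTML5 File object that would normally come from a file-picker or drop event:

var view = new viewerjs.Viewer(divId); // divId: DOM id of the host element, as in the constructor call above

// One entry per file; only the url property is required.
var imgFileArr = [
  {url: 'http://data.example.org/subj01/T1.nii.gz'},   // remote file, fetched via its URL
  {url: 'T2.nii.gz', file: pickedFile},                // local file sourced from a file-picker or drop-zone
  {url: 'dwi.nii.gz', cloudId: 'someGDriveFileId'}     // file hosted on Google Drive (hypothetical id)
];

view.addData(imgFileArr); // asynchronously loads the volumes and builds a new thumbnail bar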
Using viewerjs, MedView constructs a graphical user interface (GUI) comprising the main functional components shown in Figure 2. It contains a toolbar at the top with action buttons (using toolbarjs), a central neuroimage visualization square (provided by rboxjs) that houses the individual interactive visualizers (rendererjs), and, one on each side, two floating thumbnail bars (thbarjs) with an automatically generated snapshot image of the middle slice of each neuroimage volume. Currently, the rendererjs visualization objects provide only cross-sectional slice rendering of the 3D datasets. They use two closely associated XTK object types: the X.volume, which contains the 3D volume data, and the X.renderer2D, which performs the actual rendering and visualization.
Figure 2. The main GUI components of MedView. At the top is a toolbar (blue) provided by toolbarjs; on the left and right are floating thumbnail bars (pink) containing the center image of each volume, provided by thbarjs. In the center is an rboxjs container (yellow) that houses one or more rendererjs objects (green) that provide image interactivity. An app such as MedView assembles these building blocks as it sees fit.
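For reference, the X.volume/X.renderer2D pairing described above follows XTK's standard two-object pattern. The sketch below uses XTK directly, outside of viewerjs; the container id and file path are hypothetical:

var r = new X.renderer2D();   // 2D cross-sectional renderer
r.container = 'sliceview';    // id of the target div (hypothetical)
r.orientation = 'Z';          // render axial cross-sections
r.init();

var vol = new X.volume();     // holds the 3D volume data
vol.file = 'data/T1.nii.gz';  // NIfTI volume to load (hypothetical path)

r.add(vol);                   // attach the volume to the renderer
r.render();                   // load, parse, and draw the slice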
Up to four thumbnail images can be dragged and dropped from the thumbnail bar into the visualization square for simultaneous visualization of their corresponding volumes. This action also “removes” the volume from the thumbnail bar; closing a volume view returns the volume to its original thumbnail bar. The four-volume display limit is not programmatically imposed but reflects a design choice to show multiple volumes without visually overwhelming the display. Only the volumes currently visualized in the visualization square are kept in memory, to reduce the possibility of out-of-memory crashes. Therefore, every time a thumbnail is dropped into the visualization square a new data load is triggered from either the local filesystem or a remote service, according to the location of the volume file. Once the data is loaded in local memory, rendering is very fast as no network transfer is involved. For remote data, the speed of access is unavoidably limited by network latency. Finally, a volume can be unloaded from the visualization square by dragging and dropping it back into the thumbnail bar. This simple, modern GUI allows users to explore several 3D neuroimage volumes quickly and intuitively.
2.3. Real-Time Synchronization
The client-side rendering approach adopted in MedView allows for a very responsive, desktop-application-like visualization experience. Once a neuroimage volume has been loaded into the visualization square, the user can manipulate the visualization through peripheral device controls and immediately see the results of that interaction (e.g., moving the mouse to point to a different image location, or rolling the mouse wheel to navigate across the volume slices by cross-section). The goal of the real-time collaboration is then to provide simultaneous visualization of the same data by several collaborators working on remote computational entities, and to propagate the results of any user-data interaction to all collaborators in real time. This requires a mechanism for sharing both the neuroimage data and the visualization state among collaborators.
The real-time collaboration is implemented by synchronizing the application data (visualization parameters) among collaborators using the GDrive Realtime Collaborative Data Model (RT-CDM), which is essentially a hierarchy of collaborative objects with built-in synchronization among collaborators. Whenever data in the RT-CDM is modified, or new application data is added, the changes are automatically persisted and shared with all collaborators. The gcjs.GDriveCollab class provides methods to get and set the RT-CDM, and five event listeners that can be dynamically overridden on its object instances:
1. onConnect: called on all connected instances just after a new instance connects to the collaboration session
2. onDataFilesShared: called on all connected instances every time the collaboration owner has shared all the data files in its GDrive with a new collaborator
3. onCollabObjChanged: called on all connected instances every time the RT-CDM is updated by any remote collaborator
4. onNewChatMessage: called on all connected instances every time a new chat message is received from a remote collaborator
5. onDisconnect: called on all connected instances every time a remote collaborator disconnects
A gcjs.GDriveCollab instance thus gives any client-side JavaScript application the ability to participate in a real-time collaboration session through these methods and custom event listeners.
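As a sketch of how an application might wire these hooks, the fragment below overrides the five listeners on a gcjs.GDriveCollab instance. The constructor argument and the callback parameter names are assumptions; only the class name and the five listener names are documented above:

var collab = new gcjs.GDriveCollab(gDriveClientId); // OAuth client id (assumed constructor argument)

collab.onConnect = function(collaboratorInfo) {
  // update the participant list shown in the chat window
};
collab.onDataFilesShared = function(fileList) {
  // download local copies of the shared volumes for client-side rendering
};
collab.onCollabObjChanged = function(collabObj) {
  // apply the remote visualization state (e.g., slice index, window level)
};
collab.onNewChatMessage = function(msg) {
  // append the message to the floating chat window
};
collab.onDisconnect = function(collaboratorInfo) {
  // remove the collaborator from the participant list
};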
A collaboration session starts when a user clicks the “Start collaboration” button in the VObj's toolbar. A modal window then pops up to let the user decide whether to start a new collaboration session as the collaboration owner or to join an existing session. Either choice triggers Google's authorization flow so that the user can log into their Google account and authorize the VObj to access their GDrive space. After successful authorization, a floating chat window with a collaboration session identifier (id) appears on top of the VObj's GUI. This id (similar to a chat-room id) can then be sent to other users by email or any other online messaging system so they can use it to connect to the current collaboration session through their local VObj. The actual neuroimage data files (all the volumes corresponding to the thumbnail images in the thumbnail bar) are uploaded to the collaboration owner's GDrive. However, if a neuroimage volume comprises many Digital Imaging and Communications in Medicine (DICOM) files, those files are first concatenated into a small number of compressed (zip) files before uploading to GDrive. This is done mainly to reduce the number of required HTTP connections and the network bandwidth usage, but it also reduces the number of automatic notification emails the other collaborators receive when these files are shared with them in GDrive. Unlike Slice:Drop, the uploaded files are not publicly shared with the whole Internet; they are only automatically shared with the other authenticated collaborators, on demand, when they connect to the collaboration session. At that point their VObj instance automatically downloads a copy of the data files from GDrive for local rendering and visualization.
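The compression library used for the DICOM concatenation step is not named here; purely as an illustration, a client-side library such as JSZip could implement it roughly as follows (dicomFileArr and uploadToGDrive are hypothetical):

// Illustrative only: bundle a DICOM series into one compressed zip blob before upload.
var zip = new JSZip();
dicomFileArr.forEach(function(f) {
  zip.file(f.name, f); // f is an HTML5 File object belonging to the series
});
zip.generateAsync({type: 'blob', compression: 'DEFLATE'}).then(function(blob) {
  uploadToGDrive(blob); // hypothetical upload helper; one HTTP transfer per zip
});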
2.4. Real-Time Implications
The real-time model described here has some important implications. To allow for responsive client behavior, each participating client needs a complete copy of the image data to render locally. The delay in joining a collaborative session is thus a strong function of the network bandwidth between the client and the GDrive servers. Relatively long delays may be experienced on slow connections, especially if many (or large) datasets are being shared.
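To put this in perspective with illustrative numbers (not measurements from this work): joining a session that shares 200 MB of volume data over a 10 Mbit/s connection implies roughly (200 × 8) / 10 = 160 s of download before local rendering can begin.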
Real-time collaboration is therefore best suited to sharing a single image (or image volume) rather than multiple image sets concurrently. Moreover, beyond the startup delay of sharing multiple image sets, the viewing experience is limited by the memory available to the browser. The technology may be unworkable in environments with limited memory and/or bandwidth.
2.5. Development and Build System
We adopted a modular software development strategy that allows for separation of concerns, improves code reusability and facilitates application development and maintenance.
2.6. Comparison with Slice:Drop
MedView can be thought of as a logical successor to our previous work, Slice:Drop (Haehn, 2013). Several key technological differences exist between this work and Slice:Drop. Perhaps most importantly from a software development perspective, Slice:Drop was more of a prototype than a fully engineered application. Its internal structure was monolithic, with functionality that was not modularized, unlike MedView's reusable, modular library design. For example, in Slice:Drop the data push and pull to the Dropbox servers (for collaboration) is an inherent part of the code and not easily extractable for use elsewhere, whereas in MedView all push/pull is modularized in the reusable gcjs library, which can be used by any application.
MedView also uses OAuth 2.0 for its user authentication and authorization management, which allows fine-grained access control over uploaded data. In Slice:Drop, files uploaded to Dropbox have no authorization control and are fully publicly accessible.
In terms of collaboration, MedView has an integrated chat client, while in Slice:Drop the chat was an external application; in practice, the chat experience in MedView feels more integrated into the system. Most importantly, MedView offers a shared cursor among collaborators' viewers, which is not a feature of Slice:Drop.
Finally, in MedView multiple image volumes can be shared in a collaboration session, whereas in Slice:Drop only one image volume can be shared collaboratively.
3. Results
MedView is designed as a simple, robust, multi-device web app. By simply pointing a browser at the MedView URL, almost any device can view and interact with most medical image formats. For example, in the session shown in Figure 3, running Google Chrome on a Linux host, the user opened the standard graphical desktop file browser and navigated to a directory containing medical image files. The parent directory was simply dragged and dropped into the main MedView window, and the thumbnail bar on the left was generated. In this instance, the image volumes were all in NIfTI format. The volumes were read into the browser, the center slice in the acquisition direction was determined, and that slice was rendered as the thumbnail representation. A second directory, itself containing nested sub-directories of DICOM data, was also dragged into the browser. This created a new, second thumbnail bar (on the right), in which again the center DICOM image of each loaded series is shown as representative of that volume. Each action of dragging and dropping from the host's filesystem into the browser triggers the creation of a new thumbnail bar. The user can drag these bars and position them on either the left or the right of the screen.
Figure 3. MedView on a workstation. A screenshot of a session running on a Linux workstation in the Chrome browser. The collaboration icon in the toolbar (third from left) is active, and at the bottom left a minimized chat window provides visual indication that this session is currently linked to collaborators.
Note that at the time of writing only Google Chrome supports recursive directory processing. In other browsers, such as Firefox, Safari, and Microsoft Edge, the actual volume files have to be explicitly selected and dragged/dropped. The sketch below illustrates the mechanism behind this Chrome-only behavior.
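Chrome exposes recursive directory traversal through the non-standard webkitGetAsEntry API; the following sketch (not taken from the MedView code) shows how dropped directories can be walked to collect the contained files:

var dropZone = document.getElementById(divId); // hypothetical drop target

// Walk dropped items recursively (Chrome-only at the time of writing).
dropZone.addEventListener('drop', function(evt) {
  evt.preventDefault();
  for (var i = 0; i < evt.dataTransfer.items.length; i++) {
    var entry = evt.dataTransfer.items[i].webkitGetAsEntry();
    if (entry) { traverse(entry); }
  }
});

function traverse(entry) {
  if (entry.isFile) {
    entry.file(function(file) {
      // hand the File object to the viewer, e.g., via an imgFileArr entry
    });
  } else if (entry.isDirectory) {
    entry.createReader().readEntries(function(entries) {
      entries.forEach(traverse); // recurse into sub-directories
    });
  }
}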
Finally, Figure 4 shows a linked collaborative session as seen from an Android tablet running Firefox. Due to the constrained resolution, the main viewer windows are smaller and there is some font interference (which will be addressed in future updates).
Figure 4. MedView on a tablet. A screenshot of a collaborative session, captured on an Android tablet running Firefox. Any changes to the visual state of this session are immediately shared with linked collaborators, and vice versa.
The order of the thumbnails may be unique to each collaborative session, but the main render displays are tightly linked for every participant. While not explicitly shown, a shared pointer-cursor is also available that appears at the same location on all linked images (with the mouse over a specific image volume, press the SHIFT key and then move the mouse to the desired location). In this manner, any collaborator can explicitly highlight an exact pixel on a given image and have that information communicated to all linked sessions.
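Conceptually, the shared cursor is simply more visualization state propagated through the RT-CDM. The fragment below is a hypothetical sketch of that flow; the setter name and state layout are assumptions (the text above only states that gcjs exposes methods to get and set the RT-CDM):

rendererElem.addEventListener('mousemove', function(evt) {
  if (!evt.shiftKey) { return; } // shared cursor is active only while SHIFT is held
  // Write the cursor position into the shared data model; gcjs then fires
  // onCollabObjChanged on every remote collaborator, which redraws the cursor.
  collab.setCollabObj({
    cursor: {volumeId: currentVolumeId, x: evt.offsetX, y: evt.offsetY} // assumed layout
  });
});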
Several limitations of the technology and solution presented in this paper do exist. First, this work is intended for research use only; the security model and code do not purport to be ready for clinical certification. Furthermore, although the solution presented here is completely open source, we rely on Google services in the background to provide the “plumbing” that enables the real-time collaboration. This is deemed acceptable, however, given the ubiquity of Google services and the value of leveraging existing, powerful, off-the-shelf solutions.
Author Contributions
JB: main coding of MedView. NR: coding of the XTK library. RG: deployment of MedView in a clinical context. SP: design/UI. SM: deployment of MedView in a clinical context. RR: design feedback. PG: UI/UX. RP: architecture/lead.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The reviewer ZRQ and handling Editor declared their shared affiliation, and the handling Editor states that the process nevertheless met the standards of a fair and objective review.
Funding
Part of the work was funded by NIH R01EB014947, “MI2B2 Enabled Pediatric Neuroradiological Decision Support.”
Coleman, J. D., Klement, E., Savchenko, A., and Goettsch, A. (1997). “Teleinvivo: a novel telemedical application for collaborative volume visualization,” in Proceedings of the Fourth ACM International Conference on Multimedia (Boston, MA: ACM), 445–446.
Forslund, D. W., George, J. E., Gavrilov, E. M., Staab, T., Weymouth, T. E., Kotha, S., et al. (1998). “Telemed: development of a java/corba-based virtual electronic medical record,” in Medical Technology Symposium, 1998. Proceedings. Pacific, (IEEE), 16–19. doi: 10.1109/PACMED.1998.767876
Forslund, D. W., Phillips, R. L., Kilman, D. G., and Cook, J. L. (1996). “Telemed: a working distributed virtual patient record system,” in Proceedings of the AMIA Annual Fall Symposium (Washington, DC: American Medical Informatics Association), 990.
Haehn, D., Rannou, N., Ahtam, B., Grant, E., and Pienaar, R. (2014). “Neuroimaging in the browser using the x toolkit,” in Frontiers in Neuroinformatics Conference Abstract: 5th INCF Congress of Neuroinformatics (Munich).
Kaspar, M., Parsad, N. M., and Silverstein, J. C. (2010). “Cowebviz: interactive collaborative sharing of 3d stereoscopic visualization among browsers with no added software,” in Proceedings of the 1st ACM International Health Informatics Symposium (Arlington, VA: ACM), 809–816.
Kaspar, M., Parsad, N. M., and Silverstein, J. C. (2013). An optimized web-based approach for collaborative stereoscopic medical visualization. J. Am. Med. Inform. Assoc. 20, 535–543. doi: 10.1136/amiajnl-2012-001057
Manssour, I. H., and Dal Sasso Freitas, C. M. (2000). “Collaborative visualization in medicine,” in WSCG '2000: Conference proceeding: The 8th International Conference in Central Europe on Computers Graphics, Visualization and Interactive Digital Media '2000 in cooperation with EUROGRAPHICS and IFIP WG 5.10: University of West Bohemia (Plzen), 266–273.
Millan, J., and Yunda, L. (2014). An open-access web-based medical image atlas for collaborative medical image sharing, processing, web semantic searching and analysis with uses in medical training, research and second opinion of cases. Nova 12, 143–150.
Pienaar, R., Rannou, N., Bernal, J., Haehn, D., and Grant, P. E. (2015). “ChRIS – a web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data,” in Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Milan: IEEE).
Sherif, T., Kassis, N., Rousseau, M.-É., Adalat, R., and Evans, A. C. (2015). Brainbrowser: distributed, web-based neurological data visualization. Front. Neuroinform. 8:89. doi: 10.3389/fninf.2014.00089
Sherif, T., Rioux, P., Rousseau, M.-E., Kassis, N., Beck, N., Adalat, R., et al. (2014). Cbrain: a web-based, distributed computing platform for collaborative neuroimaging research. Front. Neuroinform. 8:54. doi: 10.3389/fninf.2014.00054
Wood, D., King, M., Landis, D., Courtney, W., Wang, R., Kelly, R., et al. (2014). Harnessing modern web application technology to create intuitive and efficient data visualization and sharing tools. Front. Neuroinform. 8:71. doi: 10.3389/fninf.2014.00071
Keywords: collaborative visualization, interactive visualization, real-time collaboration, neuroimaging, HTML5, web services, telemedicine, Google Drive
Received: 03 January 2017; Accepted: 13 April 2017;
Published: 01 May 2017.
Edited by: Richard A. Baldock, University of Edinburgh, UK
Copyright © 2017 Bernal-Rusiel, Rannou, Gollub, Pieper, Murphy, Robertson, Grant and Pienaar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jorge L. Bernal-Rusiel, email@example.com