ORIGINAL RESEARCH article

Front. Virtual Real., 28 October 2021
Sec. Virtual Reality in Medicine
Volume 2 - 2021 | https://doi.org/10.3389/frvir.2021.692641

Application of Mixed Reality in Medical Training and Surgical Planning Focused on Minimally Invasive Surgery

  • 1Bioengineering and Health Technologies Unit, Jesús Usón Minimally Invasive Surgery Centre, Cáceres, Spain
  • 2TREMIRS Project, Jesús Usón Minimally Invasive Surgery Centre, Cáceres, Spain
  • 3Thoracic Surgery Unit, Cáceres University Hospital, Cáceres, Spain
  • 4Scientific Direction, Jesús Usón Minimally Invasive Surgery Centre, Cáceres, Spain

Introduction: Medical training is a long and demanding process, in which the first stages are usually based on two-dimensional, static, and unrealistic content. Meanwhile, advances in preoperative imaging have made it an essential part of any successful surgical procedure. However, accessing this information in the operating room often requires the support of an assistant and may compromise the sterility of the surgical field. Herein, we present two solutions based on mixed reality that aim to improve both training and planning in minimally invasive surgery.

Materials and Methods: Applications were developed for use with the Microsoft HoloLens device. The urology training application provided access to a variety of anatomical and surgical training contents. Expert urological surgeons completed a questionnaire to evaluate its use. The surgical planning solution was used during laparoscopic renal tumorectomy in an experimental model and during video-assisted right upper lobectomy in an adult patient. Surgeons reported their experience using this preoperative planning tool for surgery.

Results: The solution developed for medical training was considered a useful tool for training in urological anatomy, facilitating the translation of this knowledge to clinical practice. Regarding the solution developed for surgical planning, it allowed surgeons to access the patient’s clinical information in real-time, such as preoperative imaging studies, three-dimensional surgical planning models, or medical history, facilitating the surgical approach. The surgeon’s view through the mixed reality device was shared with the rest of the surgical team.

Conclusions: The mixed reality-based solution for medical training facilitates the transfer of knowledge into clinical practice. The preoperative planning tool for surgery provides real-time access to essential patient information without losing the sterility of the surgical field. However, further studies are needed to comprehensively validate its clinical application.

Introduction

Medical education is a long and demanding process that requires extensive theoretical knowledge, along with technical and non-technical skills. During the early stages of medical education, training methods are often based on static and non-realistic learning content. Currently, these methods are being replaced by new approaches based on the use of information and communication technologies (Langridge et al., 2018; Williams et al., 2020). Apprenticeship models in surgical training have rapidly evolved from traditional approaches based on an educational philosophy following the principle of “see one, do one, teach one” to more sophisticated surgical simulators aimed at increasing the number of simulations following the “see one, simulate many deliberately, do one” philosophy (Kerr and O'Leary, 1999; Scott et al., 2008), thus allowing a dramatic increase in the skills of medical professionals and the safety of patients (Viglialoro et al., 2021). There are some strategies for surgical training based on serious video games (Rosenberg et al., 2005; Goris et al., 2014), animal models (Daly et al., 2014; DeMasi et al., 2016), and cadavers (Jacobson et al., 2009; Zuckerman et al., 2009; Rocha e Silva et al., 2016). However, due to the economic and ethical issues involved in some of these solutions, surgical training has rapidly shifted toward the use of simulation-based systems (Forgione and Guraya, 2017).

Advances in preoperative imaging have allowed for its extensive application in surgical planning, which has thus become an essential part of any successful surgical procedure (Sánchez-Margallo et al., 2015). Specifically, when facing complex surgeries, surgical planning provides valuable information for predicting and reducing any potential risks during surgery, thereby improving its safety levels. However, preoperative imaging systems are often located outside the operating room (OR) and, thus, need to be accessed outside the surgical area, or their operation requires the help of an assistant. In addition, the devices available in the OR for surgical planning may entail the loss of sterility, mainly due to the manipulation of touch screens, keyboards, and other computer equipment. In this regard, new technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR) have the potential to provide medical students with interactive and realistic training systems; furthermore, they can be valuable tools for surgeons to facilitate the planning of surgical interventions (Sadeghi et al., 2020).

Medical visualisations have already been widely exploited for supporting diagnosis in the form of X-rays, computed tomography (CT), and magnetic resonance imaging (MRI) scans (Smith et al., 2020). The use of three-dimensional (3D) representations of these data in immersive settings provides new ways to explore the data and to further enhance the tools available to medical professionals in several areas including medical training, surgical planning, and intraoperative guidance. This evolution is even more evident in the case of minimally invasive surgery (MIS), which often lacks adequate access to the patient’s anatomy (Sánchez-Margallo et al., 2018a). In this context, technological advances have radically changed surgical training and planning (Lahanas et al., 2015; Jayender et al., 2018; Li et al., 2020; Sánchez-Margallo et al., 2021).

In surgical training, most simulation-based approaches have focused on traditional VR and AR technologies, which offer different degrees of immersion but are generally unable to combine interaction with 3D information and the real-world environment. Recently, MR techniques have begun to replace these traditional technologies, aiming to combine the real working environment with virtual content so that users can interact with both simultaneously. MR surgical simulators and medical training applications are becoming an important part of the training process for physicians, as they provide a training environment suitable for recreating realistic and reproducible scenarios without putting the patient at risk (Sánchez-Margallo et al., 2018b; Sappenfield et al., 2018; Amparore et al., 2021).

The use of 3D models to estimate size and shape before performing a surgical procedure has been effectively implemented for over a decade (Hurson et al., 2007). The emergence of MR techniques can make an important difference in this field. This technology can generate personalised 3D models for each patient and visualise the internal anatomy in a fully immersive environment. This opens up new possibilities, such as preoperative simulations, to determine optimal procedures and to predict the final surgical outcomes. MR technology has already been successfully applied as a planning tool in different surgical scenarios, including urology (Li et al., 2020), thoracic surgery (Perkins et al., 2020), neurosurgery, colorectal, and bariatric surgery (Cartucho et al., 2020). These solutions allow for the inclusion of elaborate information, such as holographic images or 3D objects, that can be placed within the surgeon’s field of view, thus avoiding the need for alternative displays in the OR and facilitating a more precise alignment between virtual information and physical objects. This would reduce the need for awkward postures for the surgeon and provide new interactive experiences in surgical planning (Hu et al., 2019).

In the field of surgical assistance, the use of MR wearable devices such as the HoloLens (Microsoft; Redmond, Washington, United States), in combination with new emerging imaging technologies, can benefit the surgical process, especially in complex procedures. This technology facilitates the spatial localisation of anatomical structures and improves mental alignment, which simplifies preoperative planning (Lee et al., 2017). This technology has already been evaluated as an assistance tool during endoscopic procedures (Al Janabi et al., 2020), spine surgery (Liu et al., 2020), interventional radiology procedures (Deib et al., 2018; Heinrich et al., 2019), and orthopaedic surgery (Gregory et al., 2018). In the latter, this technology was tested using the Holoportal MR application (TeraRecon; Durham, NC), as a proof of concept, in a real surgical environment during the implantation of a shoulder prosthesis (Gregory et al., 2018).

The main objective of this study was to describe and test a set of innovative MR-based solutions for the improvement of both training and planning in MIS. The proposed solutions allow the use of new and more realistic scenarios for medical training, as well as access to different sources of preoperative patient information to support the planning of surgical procedures. The software solution developed for medical training focused on urology, whereas the surgical planning solution was tested in two different surgical scenarios, namely laparoscopic renal tumorectomy and video-assisted lobectomy.

Materials and Methods

Two MR-based applications for MIS training and planning were developed and evaluated in this study. The information was displayed through interactive holograms controlled by hand gestures or voice commands. A view cursor (similar to a mouse pointer) followed the user’s gaze, and interactions were triggered on the gazed-at hologram by hand gestures or voice commands. The two main hand gestures integrated into both applications were the “air-tap” (raising the index finger in front of the field of view and then flexing it down, similar to a mouse click), used to interact with user interface buttons, and the “air-tap and hold” gesture (similar to holding down the mouse button while dragging), used to scale, position, and rotate the 3D holograms. These gestures allowed the surgeon to show or hide information, interact with the 3D models of the training application, and navigate through the different views of the preoperative studies in the surgical planning application, among other actions.
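
As an illustration of this interaction model, the following minimal sketch shows how an air-tap and a tap-and-hold drag can be captured with Unity’s legacy GestureRecognizer API for the first-generation HoloLens (Unity 2017–2019). It is a simplified example rather than the actual code of our applications; helper names such as GazedHologram and the OnSelect message are hypothetical.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input; // legacy built-in HoloLens (v1) input API

public class GazeGestureInput : MonoBehaviour
{
    private GestureRecognizer recognizer;
    private GameObject dragged;
    private Vector3 dragStart;

    void Start()
    {
        recognizer = new GestureRecognizer();
        // Air-tap for button presses; manipulation for tap-and-hold dragging.
        recognizer.SetRecognizableGestures(
            GestureSettings.Tap | GestureSettings.ManipulationTranslate);
        recognizer.Tapped += OnAirTap;
        recognizer.ManipulationStarted += OnDragStart;
        recognizer.ManipulationUpdated += OnDrag;
        recognizer.StartCapturingGestures();
    }

    // The view cursor follows the head gaze: a ray cast from the camera.
    private GameObject GazedHologram()
    {
        Transform head = Camera.main.transform;
        return Physics.Raycast(head.position, head.forward, out RaycastHit hit)
            ? hit.collider.gameObject : null;
    }

    private void OnAirTap(TappedEventArgs args)
    {
        GameObject target = GazedHologram();
        if (target != null)
            target.SendMessage("OnSelect", SendMessageOptions.DontRequireReceiver);
    }

    private void OnDragStart(ManipulationStartedEventArgs args)
    {
        dragged = GazedHologram();
        if (dragged != null) dragStart = dragged.transform.position;
    }

    private void OnDrag(ManipulationUpdatedEventArgs args)
    {
        // cumulativeDelta is the total hand displacement since the hold began.
        if (dragged != null)
            dragged.transform.position = dragStart + args.cumulativeDelta;
    }

    void OnDestroy()
    {
        recognizer.Tapped -= OnAirTap;
        recognizer.ManipulationStarted -= OnDragStart;
        recognizer.ManipulationUpdated -= OnDrag;
        recognizer.Dispose();
    }
}
```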

These applications allowed the content to be viewed on the HoloLens glasses themselves (single user) or shared with other devices via streaming (multiple users), fostering communication within the surgical team and the real-time transmission of knowledge to others.

The first generation of HoloLens was used as the MR device. This wearable headset combines several types of sensors (an inertial measurement unit, four environment-understanding cameras, one depth camera, one high-definition video camera, four microphones, and an ambient light sensor) with an Intel 32-bit processor (Intel Corporation; Santa Clara, CA, United States) and a custom-built Microsoft holographic processing unit. The device weighs 579 g and its battery life can reach 5.5 h. HoloLens creates visual information by reflecting two high-definition 16:9 light engines onto the user’s retinas (with automatic interpupillary calibration), without interfering with the visual information from the surrounding environment.

Mixed Reality Framework

Unity (Unity Technologies, San Francisco, CA, United States) was selected as the development platform because it allows easy interaction with visual elements (both 2D and 3D) and integrates different plugins and libraries that greatly facilitate implementation, allowing applications to be ported to the most widespread platforms. Each application comprised one or more scenes, which in turn were made up of objects structured in a parent-child hierarchy. Each object had a “Transform” component, which controls its position, rotation, and scale, and could be extended with many other components, including user-programmed scripts. Microsoft Visual Studio was adopted as the programming environment, with C# as the programming language, to deploy the applications on the device.
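
The following short sketch illustrates the role of the “Transform” component described above; the class and child names are illustrative only.

```csharp
using UnityEngine;

// Every Unity object carries a Transform controlling position, rotation,
// and scale; holograms are manipulated through it.
public class HologramTransformController : MonoBehaviour
{
    public void SetPose(Vector3 position, Quaternion rotation, float uniformScale)
    {
        transform.position = position;                     // world-space placement
        transform.rotation = rotation;                     // orientation
        transform.localScale = Vector3.one * uniformScale; // uniform scaling
    }

    // Children (e.g., individual anatomical systems) inherit the parent
    // transform, so moving the parent moves the whole model.
    public void ToggleChild(string childName, bool visible)
    {
        Transform child = transform.Find(childName);
        if (child != null)
            child.gameObject.SetActive(visible);
    }
}
```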

The applications were developed using the Mixed Reality Toolkit development kit for Unity, which has become the standard for developing MR applications for HoloLens devices. It has been used in many medical applications, including rhinoplasty (Maasthi et al., 2020), arthroplasty training (Turini et al., 2018), and open abdominal surgery (Galati et al., 2020). Preoperative imaging studies in the Digital Imaging and Communications in Medicine (DICOM) format were loaded using a customised version of the Fellow Oak DICOM toolkit (Al-Zu’bi et al., 2017).
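
As a hedged example of the DICOM-loading step, the sketch below opens a single slice with the Fellow Oak DICOM (fo-dicom) library; the method names follow the fo-dicom 4.x API, and the console output is purely illustrative (in the actual applications, the decoded raster would be copied into a Unity texture).

```csharp
using System;
using Dicom;          // fo-dicom ("Fellow Oak DICOM") NuGet package
using Dicom.Imaging;

public static class DicomLoader
{
    // Minimal sketch: open one DICOM slice and read a few tags plus its pixels.
    public static void LoadSlice(string path)
    {
        DicomFile file = DicomFile.Open(path);

        string patient = file.Dataset.GetSingleValue<string>(DicomTag.PatientName);
        string modality = file.Dataset.GetSingleValue<string>(DicomTag.Modality);
        Console.WriteLine($"Patient: {patient}, Modality: {modality}");

        // Decode the slice into a renderable raster.
        var image = new DicomImage(file.Dataset);
        Console.WriteLine($"Slice size: {image.Width}x{image.Height}");
    }
}
```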

The two MR-based applications were developed on the Microsoft Windows 10 operating system. The final products were two Universal Windows Platform applications (Microsoft) built for Intel’s x86 architecture (Intel Corporation).

Training in Urology

The medical training application focused on the human pelvis. Specifically, a 3D anatomical model was developed based on the CT study of a real patient. The model included interactive information on the different anatomical systems of the pelvic cavity (vascular, nervous, muscular, bone, digestive, urinary, and reproductive systems), as shown in Figures 1A,B.

FIGURE 1. (A) General interface of the training application with a 3D interactive anatomical model of the human pelvis. (B) Detail of the different systems available in the human pelvis model (from left to right): muscular, skeletal, vascular, nervous, renal, and reproductive systems.

The developed MR solution allowed the visualisation of the 3D models, their spatial manipulation (scale, rotation, and translation), and their independent activation or deactivation (Figure 1A). In addition, the user received real-time information about the anatomical element being pointed at or visualised: the anatomical structure was highlighted and its name and other relevant information were displayed next to it. Voice commands were available to show the different anatomical systems by saying “show/hide” followed by “muscular/bone/vascular/nervous/renal/reproductive” and ending with “system”.
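
A minimal sketch of how such voice commands can be registered with Unity’s KeywordRecognizer is shown below; the phrase list mirrors the commands described above, while the SetSystemVisible helper and the child-object naming are assumptions for illustration.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class AnatomyVoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, System.Action> commands =
        new Dictionary<string, System.Action>();

    void Start()
    {
        // One keyword phrase per action, e.g. "show muscular system".
        foreach (string system in new[]
                 { "muscular", "bone", "vascular", "nervous", "renal", "reproductive" })
        {
            string name = system;
            commands[$"show {name} system"] = () => SetSystemVisible(name, true);
            commands[$"hide {name} system"] = () => SetSystemVisible(name, false);
        }

        recognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += args => commands[args.text]();
        recognizer.Start();
    }

    private void SetSystemVisible(string system, bool visible)
    {
        // Hypothetical helper: toggle the child object holding that system's mesh.
        Transform t = transform.Find(system);
        if (t != null) t.gameObject.SetActive(visible);
    }

    void OnDestroy() => recognizer?.Dispose();
}
```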

In addition to the interactive visualisation of the 3D anatomical model, the application allowed the holographic visualisation of videos about related surgical techniques (laparoscopic nephrectomy, prostatectomy, etc.), preoperative imaging studies with and without pathologies, and medical illustrations.

This new medical training solution was tested by expert urologists who attended a training activity at the Jesús Usón Minimally Invasive Surgery Centre (Cáceres, Spain). At the beginning of the session, participants received a brief explanation of the gestural and voice interaction methods, including the different voice commands for the MR device. Next, participants were invited to interact with the functionalities of the interactive 3D anatomical model of the human pelvis and all its associated systems, reference videos of related surgical procedures, and medical illustrations, using both gestural interactions and voice commands. To evaluate the user experience with the application and the MR glasses, they completed a personalised questionnaire at the end of the session (Table 1). Each item was rated on a 5-point Likert scale. In addition, space was provided for any additional comments.

TABLE 1. Set of subjective parameters regarding the surgeon’s experience with the urology training application developed for HoloLens.

Surgical Planning in Minimally Invasive Surgery

The general functionalities of this application included visualisation and interaction with preoperative imaging studies of the patient (CT or MRI studies), as shown in Figure 2A, and interaction with 3D anatomical models of the patient, generated from the preoperative studies (Figure 2B). In addition, it allowed for the visualisation of medical illustrations regarding the anatomical structures to be addressed during surgery (Figure 2C) and videos/tutorials regarding similar MIS procedures (Figure 2E), as well as providing access to the patient’s medical history in situ (Figure 2D). All content was displayed in the form of holograms that the medical professional could move at will and position at the most appropriate location in the surgical work environment. Additionally, voice commands were available to show or hide the different tools by saying “show/hide” followed by “preoperative study/three-dimensional model/clinical history/medical illustration/surgical video”.

FIGURE 2. Assistance content for surgical planning: (A) views of the preoperative imaging study, (B) 3D anatomical model of the patient, (C) medical illustration of the anatomy to be addressed during surgery, (D) medical history of the patient, and (E) reference video of the surgical procedure.

As a first step, the application loaded preoperative imaging studies (following the DICOM standard) from a local or remote file location. Each image view (axial, coronal, and sagittal) was displayed on an individual panel (Figure 2A), which allowed the user to navigate (forward or backward) through the set of available slices. These preoperative imaging studies were also used to create 3D anatomical models. For this purpose, 3D Slicer (www.slicer.org), an open-source software package for medical image analysis, was used. The user could adjust the position, rotation, and scale of the 3D model to facilitate its visualisation. Interactions with the holograms were possible using gestures or voice control.
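
The slice-navigation behaviour can be sketched as follows, assuming each view’s slices have already been decoded into Unity textures (e.g., with fo-dicom, as above); one instance of this component would drive each of the three panels.

```csharp
using UnityEngine;

// Minimal sketch: one panel per anatomical view (axial, coronal, or sagittal),
// stepping forward/backward through a pre-decoded slice stack.
public class SliceViewer : MonoBehaviour
{
    public Renderer panel;      // quad displaying the current slice
    public Texture2D[] slices;  // slice stack for this view
    private int index;

    void Start() => Step(0);    // show the first slice on startup

    public void Step(int delta) // +1 = forward, -1 = backward
    {
        index = Mathf.Clamp(index + delta, 0, slices.Length - 1);
        panel.material.mainTexture = slices[index];
    }
}
```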

This application was tested during laparoscopic renal tumorectomy in a porcine model. This study was conducted in the experimental operating rooms of the Jesús Usón Minimally Invasive Surgery Centre in Cáceres (Spain) and was approved by the local animal welfare and ethics committee. Prior to surgery, an artificial renal tumour model was developed using a mixture of alginate and saline. Subsequently, a CT scan of the animal was obtained, and a 3D anatomical model from the preoperative study was created.

Finally, the MR application was used as a tool to assist in surgical planning during video-assisted right upper lobectomy, including systematic lymphadenectomy for squamous cell carcinoma in the right upper lobe. This procedure was performed at Cáceres University Hospital (Spain).

Results

Training in Urology

A group of six surgeons, experienced in urology (>100 laparoscopic procedures performed), evaluated this application. All of them interacted with the different functionalities of the application: a 3D anatomical model of the human pelvis and its different anatomical systems (Figures 3A,B,D), reference videos of related surgical procedures, and medical illustrations (Figure 3C).

FIGURE 3. Interaction with 3D anatomical models: the human pelvis (A,D) and uterus (B). Visualisation of reference videos of related surgical procedures (C).

The surgeons found the MR solution to be a very useful tool for learning and studying the human pelvic anatomy and its application in urological surgeries, both individually and in groups (through broadcasting on external screens). They stated that the application facilitates the transfer of theoretical knowledge to actual practice and that this technology can potentially be useful for surgical planning and assistance during MIS.

The intuitive interaction with the preoperative imaging studies and the clarity and organisation of the 3D anatomical models were the aspects most highly rated by the surgeons (Figure 4). In contrast, the comfort of wearing the glasses obtained the lowest score.

FIGURE 4. Results of the subjective questionnaire regarding the usefulness and functionalities of the application and ergonomics of the MR device. Results are presented as mean values and standard deviations (shaded area around the main plot).

Surgical Planning in Minimally Invasive Surgery

Two experienced laparoscopic surgeons (>100 laparoscopic procedures performed) tested the MR surgical planning application during laparoscopic renal tumorectomy in a porcine model. They were able to interact in the experimental OR with different views of the preoperative study (CT scan) and thus identify the lesion to be addressed during surgery (Figure 5A). In addition, interaction with the 3D anatomical model of the animal made it easier for the surgeons to plan the different phases of the surgical procedure, mainly in aspects related to the localisation of the renal artery and the planning of the tumour resection area. As support material for the surgical planning, the surgeons also had access to reference videos of the procedure to be performed and medical illustrations of the porcine anatomy (along with the different steps to be carried out during the renal tumorectomy).

FIGURE 5. (A) Interaction with preoperative study views and their corresponding 3D model for planning the renal tumorectomy in the porcine model. (B) Axial, coronal, and sagittal views of the preoperative study for planning the video-assisted lobectomy. (C) Interaction with the patient’s anatomical 3D model with information on the vascular structures and the tumour to be addressed during the surgical procedure. The surgeon placed the model according to the patient’s position. (D) An additional screen shared the surgical planning assistance contents with the rest of the surgical team, such as the CT study and a reference video of the surgical procedure to be performed.

Regarding the use of the MR application for surgical planning in video-assisted right upper lobectomy, no complications were observed during surgery. Prior to surgery, the system allowed the surgeon to access the patient’s medical history in situ and in real-time and to review the patient’s preoperative study (CT scan). The system also allowed the surgeon to readily visualise and manipulate a 3D model of the lung, with its respective vascular and bronchial elements, as well as the tumour to be addressed (Figures 5B,C).

The surgeon’s view through the device was shared with the rest of the surgical team (Figure 5D). The surgeon placed the holographic models (with surgical planning information) outside the direct field of view over the operating table, where they remained available for consultation during the surgical procedure. As in the previous case, the surgeon reported some ergonomic aspects of the MR device to be improved, such as its weight (579 g) and the heat generated at the front of the device during use.

Discussion

In this study, we presented two MR-based applications oriented toward surgical training and planning in MIS. The information was displayed using interactive holograms that the user controlled through hand gestures or voice commands. The contents could be viewed from the glasses themselves (single user) or shared with others via streaming, encouraging the exchange of information. Both applications allowed the user to choose the content to be displayed so that, once developed, they could be fed with specific training content or with surgical planning content specific to each type of surgery. Some of the features shown in this study were anticipated in previous studies as promising applications of MR for surgical assistance (Gregory et al., 2018).

An important characteristic of MR devices is that they overlay digital content on the real world. As a result, it is highly important to optimise the application so that the frame rate is as stable and as high as possible. Specifically, it is advisable to maintain a frame rate above 30 frames per second to avoid discomfort, nausea, or dizziness (Louis et al., 2019). These issues are not as crucial as in VR devices, but these precautions should be maintained for an optimal user experience when using MR applications.
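
For reference, a simple frame-rate check of the kind used during development could look like the following sketch, which smooths the per-frame delta time and warns when the rate drops below the recommended 30 FPS.

```csharp
using UnityEngine;

// Minimal sketch: smoothed frame-rate monitor for checking the 30 FPS target.
public class FrameRateMonitor : MonoBehaviour
{
    private float smoothedDelta = 1f / 60f; // seeded to avoid a first-frame spike

    void Update()
    {
        // Exponential smoothing avoids reacting to single-frame outliers.
        smoothedDelta = Mathf.Lerp(smoothedDelta, Time.unscaledDeltaTime, 0.1f);
        float fps = 1f / smoothedDelta;
        if (fps < 30f)
            Debug.LogWarning($"Frame rate below target: {fps:F1} FPS");
    }
}
```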

During the development of the various modules that make up the presented MR solutions, some aspects must be highlighted for further applications. For the module visualising the three views (axial, coronal, and sagittal) of the preoperative study, the displayed content did not entail a high computational cost for the MR device: although the volumetric point cloud of the DICOM file was processed internally, only three planes were rendered, which did not compromise the frame rate.

Regarding the module for generating the 3D model from the preoperative imaging study, it allowed the scaling, rotation, and positioning of the model according to the user’s preference for a better perception of the patient’s anatomy. Generating this content involved a certain computational cost depending on the model; therefore, caution should be exercised when segmenting the anatomical areas of interest, as well as in the subsequent polygon reduction of the resulting mesh. In the cases described in this study, it was not necessary to reduce the mesh of the 3D model obtained. In addition, it is suggested to use the standard materials provided by Microsoft’s Mixed Reality Toolkit framework, which are optimised for the device’s performance.

For the visualisation of reference surgical videos, it would be possible to label them temporally and divide their content into chapters, thus allowing easier access to the different steps involved in the surgery. The maximum resolution for viewing videos on the device was 1,280 × 720 pixels. Therefore, although the application allowed videos to be played at higher resolutions, it was recommended to provide videos at this resolution to avoid overloading performance and to maintain the desired frame rate (30 FPS).
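
A possible implementation of such chapter-based navigation with Unity’s VideoPlayer is sketched below; the chapter names and timestamps are hypothetical examples, not those of our reference videos.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Video;

// Minimal sketch, assuming reference videos are encoded at 1280x720 and
// chapter timestamps (in seconds) are authored by hand for each procedure.
public class SurgicalVideoChapters : MonoBehaviour
{
    public VideoPlayer player; // assigned in the Unity inspector

    private readonly Dictionary<string, double> chapters =
        new Dictionary<string, double>
        {
            { "trocar placement", 0 },    // hypothetical timestamps
            { "artery dissection", 95 },
            { "tumour resection", 260 },
        };

    public void JumpTo(string chapter)
    {
        if (chapters.TryGetValue(chapter, out double seconds))
        {
            player.time = seconds; // seek to the chapter start
            player.Play();
        }
    }
}
```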

Regarding the findings obtained from the users’ experience with the MR application developed for training in urology, three surgeons experienced a steep learning curve in interacting with the MR device, as reflected in items 2 and 3 of the subjective evaluation questionnaire (Figure 4). This type of technology introduces new concepts and methods of interaction that require some familiarisation time (Hurson et al., 2007; Maasthi et al., 2020). The second generation of the HoloLens glasses (HoloLens v2) could help solve this issue thanks to its advanced features for enhancing user interaction. Another aspect of the interaction that is challenging for users is the use of voice commands (Figure 4, item 3). Since the commands were implemented in English, in order to facilitate the universality of the applications, they could lead to some complications for non-native English speakers (Hurson et al., 2007). As for the ergonomic aspects of the device, both the users of the training application (Figure 4, item 1) and those of the surgical planning application considered this a feature that needs to be improved (Turini et al., 2018). The MR device is still relatively uncomfortable to wear, especially when used for prolonged periods, mainly due to its weight and the heat it may cause on the user’s forehead. As future work, most surgeons proposed extending the training models to include additional anatomical structures, such as the prostate, which would improve anatomical training and preparation for surgeries such as laparoscopic prostatectomy.

Few applications have been found for MIS training using MR technology. A study by Amparore et al. compared 3D virtual reconstruction with 3D printing of organs, such as the kidney and prostate, to determine which method was more suitable for visualisation and localisation of tumour lesions (Amparore et al., 2021). They concluded that MR is the preferred choice for surgical training and planning, with HoloLens MR glasses being considered the most adequate technology for surgical planning.

The MR solutions for surgical planning presented in this study were tested during two different MIS procedures, in which surgeons provided feedback on their experience. This will allow us to make the improvements needed to enhance interaction and the user experience in future applications. No complications were reported in either surgery. In both cases, the MR solutions allowed navigation through the CT studies, as well as the visualisation of patient-specific 3D models of the anatomy. In the porcine model, the renal anatomy was shown together with the artificial tumour to be excised. In the adult patient, the lung anatomy was shown in combination with the vascular system, bronchi, and the tumour to be treated. The surgeon also had access to reference surgical videos, as well as different documents with the patient’s clinical history and reference anatomical illustrations. The surgeon’s view, together with the information in the form of holograms, was shared on the OR screens via streaming. It should be noted that this video stream, delivered using the Microsoft Windows Device Portal software, exhibited a slight delay of approximately one second throughout the transmission.
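
For context, the Windows Device Portal serves the mixed reality capture stream over HTTP, which is the mechanism behind the latency mentioned above. The sketch below shows how a client (e.g., a desktop companion application, outside Unity) might consume and record that stream; the endpoint URL and query parameters reflect the first-generation Device Portal documentation as we understand it, and the device address, credentials, and file name are placeholders.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch, assuming the HoloLens Device Portal is enabled and reachable
// at deviceIp with basic-auth credentials.
public static class MrcStreamClient
{
    public static async Task SaveStreamAsync(string deviceIp, string user, string password)
    {
        // holo=true composites the holograms onto the photo/video camera feed.
        string url = $"https://{deviceIp}/api/holographic/stream/live_med.mp4?holo=true&pv=true";

        var handler = new HttpClientHandler
        {
            // Device Portal uses a self-signed certificate by default.
            ServerCertificateCustomValidationCallback = (msg, cert, chain, errors) => true
        };
        using var client = new HttpClient(handler);
        string auth = Convert.ToBase64String(
            System.Text.Encoding.ASCII.GetBytes($"{user}:{password}"));
        client.DefaultRequestHeaders.Add("Authorization", $"Basic {auth}");

        using Stream mp4 = await client.GetStreamAsync(url);
        using FileStream file = File.Create("surgeon_view.mp4");
        await mp4.CopyToAsync(file); // record the surgeon's view for later review
    }
}
```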

The streaming option of our tool allowed all personnel inside and outside the OR to see directly what the surgeon saw. This feature was also reported by Gregory et al. during the implantation of a shoulder prosthesis (Gregory et al., 2018). This tool can also be used in videoconferences during live surgeries as a method of immersion in the surgery. Another feature worth pointing out is the possibility of recording the surgery from the surgeon’s perspective: in the event of a complication during surgery, the surgeon could access the recording and review it in more detail.

As was also noted for the training application, surgeons reported some ergonomic aspects of the MR device that should be improved. Although it did not cause significant discomfort, they stated that the weight of the device (HoloLens v1) can become uncomfortable if worn for several hours. The new model of this device (HoloLens v2) already has a lighter design. Additionally, surgeons indicated that the device generated heat in the forehead area, which could be alleviated by increasing the separation between the device and the user’s forehead.

The most complete MR-based surgical planning solution found in the literature offers information in the form of interactive holograms of both a 3D model and the different MRI/CT views, and even includes a component to display intraoperative information (e.g., intraoperative ultrasound) (Cartucho et al., 2020). However, this application was not used as a surgical planning tool during any actual surgery; it was only evaluated in a pilot study with a phantom, in which data were collected through a survey. Other MR solutions (with less functionality) were used retrospectively as surgical planning tools in patients undergoing thoracic surgery (Perkins et al., 2020). That application allowed the visualisation of only one of the three views of the preoperative imaging study, as well as the manipulation of 3D models obtained from it. The authors used a simulation of lung motion by animating the 3D model, which facilitated the estimation of the tumour location. However, it did not allow displaying multiple views of preoperative imaging studies or other additional information to support surgical planning, such as the patient’s medical history, medical illustrations, or videos of similar surgical interventions.

The largest study on the use of MR applications for assistance during laparoscopic surgery was a comparative study of 50 laparoscopic nephrectomies with MR assistance versus 50 similar surgeries without it (Li et al., 2020). The results concluded that MR technology can improve the success rate of laparoscopic surgeries, as well as offer added value in clinical applications such as planning, navigation, consultation, teaching, and patient communication.

Other solutions made use of MR as a substitute for conventional screens in the OR, displaying the endoscopic video directly on the device in the form of a hologram, thus allowing the surgeon to work in a more comfortable position during surgery (Deib et al., 2018; Al Janabi et al., 2020). Some MR clinical applications seek to spatially register 3D holograms on anatomical elements, thus being able to overlap virtual information with reality (Heinrich et al., 2019; Liu et al., 2020).

To the best of our knowledge, the present MR-based surgical planning solution is the first to be applied during video-assisted lobectomy. It is also worth noting the novel inclusion of the holographic visualisation of the volumetric point cloud of the 3D surgical planning model. The application was iteratively refined after its evaluation in an experimental model by different surgeons, optimising its interaction and usability.

This study has some limitations to be taken into account in further research, such as the small number of cases in which the developed solutions were applied and the limited number of surgeons who were able to test them. As reflected in the results, the learning curve of this technology is an aspect to be considered, as these MR devices are not common in the day-to-day work of surgeons. Although the interaction method is optimal for maintaining sterile conditions (since there is no physical contact with the elements), the lack of tangible hardware to interact with, such as a computer mouse, joystick, or tablet, requires a more pronounced adaptation process.

Several future studies are required to improve the proposed solutions. One of our main objectives is to optimise the visualisation performance of volumetric point clouds in 3D models. To achieve this, different possible solutions will be analysed to improve the real-time visualisation of DICOM files and the performance of the MR device itself. In addition, we propose the development of a customised method for streaming the surgeon’s view together with the holograms, in order to overcome the latency of the current method. This solution could be the first step toward using the MR glasses as a monitor for the laparoscopic camera with a real-time video feed, improving the surgeon’s ergonomics during surgery. Additionally, the solutions presented will be adapted for use with the HoloLens v2, so that its eye-tracking system can be used for interaction with the holographic models. This would allow the user to provide direct feedback about the element they are looking at. Similarly, these data could be analysed to generate heat maps of the areas most consulted by medical professionals compared with those consulted by medical students and residents. Once we have a final version of the applications for training and surgical planning in MIS, incorporating all the improvements and feedback obtained in this study, several specific aspects related to the user experience could be validated. The mental and physical workload of users with regard to these applications could be determined using the NASA-TLX (Task Load Index) questionnaire (Turini et al., 2018). Similarly, the overall system usability and the user’s interest/enjoyment could be analysed by means of the System Usability Scale (SUS) (Gregory et al., 2018) and the Intrinsic Motivation Inventory (IMI) scale (Galati et al., 2020), respectively.

Conclusion

The MR-based solution for surgical training presented in this study is a useful tool for urological anatomy training, facilitating the transfer of this knowledge to actual clinical practice. The solution developed for assistance during surgical planning provides real-time access to essential patient information, such as preoperative imaging studies, the 3D surgical planning model, or the clinical history, without losing the sterility of the surgical field. This tool was successfully tested during laparoscopic tumorectomy in an experimental model and video-assisted lobectomy in an adult patient. The surgeon’s view can be shared for communication and learning purposes, as well as for later review of possible surgical complications. However, further studies are needed to validate its clinical application comprehensively.

Abbreviations

MIS, minimally invasive surgery; OR, operating room; VR, virtual reality; AR, augmented reality; MR, mixed reality; 3D, three-dimensional; CT, computed tomography; MRI, magnetic resonance imaging; DICOM, Digital Imaging and Communications in Medicine.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.

Ethics Statement

The animal study was reviewed and approved by the competent local Animal Welfare and Ethics Committee.

Author Contributions

JS, CP, RF, and FS: Conceptualization, Methodology. CPM: Software. JS, CP, and RF: Investigation. JS, CP: Writing—Original Draft. RF, FS: Writing—Review & Editing. FS: Supervision, Funding acquisition. All authors contributed to manuscript revision, read, and approved the submitted version.

Funding

This work has been partially funded by the Spanish Ministry of Science and Innovation, the European Regional Development Fund (FEDER) “A way to make Europe” and the Junta de Extremadura (Spain) (TA18023, GR18199, CPI-2019-33-1-TRE-14).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Al Janabi, H. F., Aydin, A., Palaneer, S., Macchione, N., Al-Jabir, A., Khan, M. S., et al. (2020). Effectiveness of the HoloLens Mixed-Reality Headset in Minimally Invasive Surgery: a Simulation-Based Feasibility Study. Surg. Endosc. 34 (3), 1143–1149. doi:10.1007/s00464-019-06862-3

Al-Zu’bi, S., Al-Ayyoub, M., Jararweh, Y., and Shehab, M. A. (2017). Enhanced 3D Segmentation Techniques for Reconstructed 3D Medical Volumes: Robust and Accurate Intelligent System. Proced. Comp. Sci. 113, 531–538. doi:10.1016/j.procs.2017.08.318

Amparore, D., Pecoraro, A., Checcucci, E., De Cillis, S., Piramide, F., Volpi, G., et al. (2021). 3D Imaging Technologies in Minimally-Invasive Kidney and Prostate Cancer Surgery: Which Is the Urologists' Perception. Minerva Urol. Nephrol. [Online ahead of print]. doi:10.23736/S2724-6051.21.04131-X

Cartucho, J., Shapira, D., Ashrafian, H., and Giannarou, S. (2020). Multimodal Mixed Reality Visualisation for Intraoperative Surgical Guidance. Int. J. CARS 15, 819–826. doi:10.1007/s11548-020-02165-4

Daly, S. C., Wilson, N. A., Rinewalt, D. E., Bines, S. D., Luu, M. B., and Myers, J. A. (2014). A Subjective Assessment of Medical Student Perceptions on Animal Models in Medical Education. J. Surg. Educ. 71 (1), 61–64. doi:10.1016/j.jsurg.2013.06.017

Deib, G., Johnson, A., Unberath, M., Yu, K., Andress, S., Qian, L., et al. (2018). Image Guided Percutaneous Spine Procedures Using an Optical See-Through Head Mounted Display: Proof of Concept and Rationale. J. Neurointervent Surg. 10, 1187–1191. doi:10.1136/neurintsurg-2017-013649

DeMasi, S. C., Katsuta, E., and Takabe, K. (2016). Live Animals for Preclinical Medical Student Surgical Training. Edorium J. Surg. 3 (2), 24–31. doi:10.5348/S05-2016-16-OA-6

Forgione, A., and Guraya, S. Y. (2017). The Cutting-Edge Training Modalities and Educational Platforms for Accredited Surgical Training: a Systematic Review. J. Res. Med. Sci. 22, 51. doi:10.4103/jrms.JRMS_809_16

Galati, R., Simone, M., Barile, G., De Luca, R., Cartanese, C., and Grassi, G. (2020). Experimental Setup Employed in the Operating Room Based on Virtual and Mixed Reality: Analysis of Pros and Cons in Open Abdomen Surgery. J. Healthc. Eng. 2020, 1–11. doi:10.1155/2020/8851964

Goris, J., Jalink, M. B., and Ten Cate Hoedemaker, H. O. (2014). Training Basic Laparoscopic Skills Using a Custom-Made Video Game. Perspect. Med. Educ. 3 (4), 314–318. doi:10.1007/s40037-013-0106-8

Gregory, T. M., Gregory, J., Sledge, J., Allard, R., and Mir, O. (2018). Surgery Guided by Mixed Reality: Presentation of a Proof of Concept. Acta Orthopaedica 89 (5), 480–483. doi:10.1080/17453674.2018.1506974

Heinrich, F., Schwenderling, L., Becker, M., Skalej, M., and Hansen, C. (2019). HoloInjection: Augmented Reality Support for CT‐guided Spinal Needle Injections. Healthc. Tech. Lett. 6 (6), 165–171. doi:10.1049/htl.2019.0062

Hu, H., Shao, Z., Ye, L., and Jin, H. (2019). Application of Mixed Reality Technology in Surgery. Int. J. Clin. Exp. Med. 12 (4), 3107–3133.

Hurson, C., Tansey, A., O’Donnchadha, B., Nicholson, P., Rice, J., and McElwain, J. (2007). Rapid Prototyping in the Assessment, Classification and Preoperative Planning of Acetabular Fractures. Injury 38, 1158–1162. doi:10.1016/j.injury.2007.05.020

Jacobson, S., Epstein, S. K., Albright, S., Ochieng, J., Griffiths, J., Coppersmith, V., et al. (2009). Creation of Virtual Patients from CT Images of Cadavers to Enhance Integration of Clinical and Basic Science Student Learning in Anatomy. Med. Teach. 31 (8), 749–751. doi:10.1080/01421590903124757

Jayender, J., Xavier, B., King, F., Hosny, A., Black, D., Pieper, S., et al. (2018). A Novel Mixed Reality Navigation System for Laparoscopy Surgery. Med. Image Comput. Comput. Assist. Interv. 11703, 72–80. doi:10.1007/978-3-030-00937-3_9

Kerr, B., and O'Leary, J. P. (1999). The Training of the Surgeon: Dr. Halsted's Greatest Legacy. Am. Surg. 65 (11), 1101–1102.

Lahanas, V., Loukas, C., Smailis, N., and Georgiou, E. (2015). A Novel Augmented Reality Simulator for Skills Assessment in Minimal Invasive Surgery. Surg. Endosc. 29 (8), 2224–2234. doi:10.1007/s00464-014-3930-y

Langridge, B., Momin, S., Coumbe, B., Woin, E., Griffin, M., and Butler, P. (2018). Systematic Review of the Use of 3-Dimensional Printing in Surgical Teaching and Assessment. J. Surg. Educ. 75 (1), 209–221. doi:10.1016/j.jsurg.2017.06.033

Lee, S. C., Fuerst, B., Tateno, K., Johnson, A., Fotouhi, J., Osgood, G., et al. (2017). Multi‐modal Imaging, Model‐based Tracking, and Mixed Reality Visualisation for Orthopaedic Surgery. Healthc. Technol. Lett. 4, 168–173. doi:10.1049/htl.2017.0066

Li, G., Dong, J., Wang, J., Cao, D., Zhang, X., Cao, Z., et al. (2020). The Clinical Application Value of Mixed‐reality‐assisted Surgical Navigation for Laparoscopic Nephrectomy. Cancer Med. 9 (15), 5480–5489. doi:10.1002/cam4.3189

Liu, H., Wu, J., Tang, Y., Li, H., Wang, W., Li, C., et al. (2020). Percutaneous Placement of Lumbar Pedicle Screws via Intraoperative CT Image-Based Augmented Reality-Guided Technology. J. Neurosurg. Spine 32 (4), 1–6. doi:10.3171/2019.10.SPINE19969

Louis, T., Troccaz, J., Rochet-Capellan, A., and Bérard, F. (2019). “Is it Real? Measuring the Effect of Resolution, Latency, Frame Rate and Jitter on the Presence of Virtual Entities,” in Proceedings of the 2019 ACM International Conference on Interactive Surfaces and Spaces (ISS '19), Seoul, South Korea, November 2019 (New York, USA: Association for Computing Machinery), 5–16. doi:10.1145/3343055.3359710

Maasthi, M. J., Gururaj, H., Janhavi, V., Harshitha, K., and Swathi, B. (2020). “An Interactive Approach Deployed for Rhinoplasty Using Mixed Reality,” in 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 7-11 Jan. 2020 (IEEE), 680–682. doi:10.1109/comsnets48256.2020.9027491

Perkins, S. L., Krajancich, B., Yang, C.-F. J., Hargreaves, B. A., Daniel, B. L., and Berry, M. F. (2020). A Patient-specific Mixed-Reality Visualization Tool for Thoracic Surgical Planning. Ann. Thorac. Surg. 110 (1), 290–295. doi:10.1016/j.athoracsur.2020.01.060

Rocha e Silva, R., Lourenção, A., Goncharov, M., and Jatene, F. B. (2016). Low Cost Simulator for Heart Surgery Training. Braz. J. Cardiovasc. Surg. 31 (6), 449–453. doi:10.5935/1678-9741.20160089

Rosenberg, B. H., Landsittel, D., and Averch, T. D. (2005). Can Video Games Be Used to Predict or Improve Laparoscopic Skills. J. Endourology 19 (3), 372–376. doi:10.1089/end.2005.19.372

Sadeghi, A. H., Bakhuis, W., Van Schaagen, F., Oei, F. B. S., Bekkers, J. A., Maat, A. P. W. M., et al. (2020). Immersive 3D Virtual Reality Imaging in Planning Minimally Invasive and Complex Adult Cardiac Surgery. Eur. Hear. J. - Digit Heal 1 (1), 62–70. doi:10.1093/ehjdh/ztaa011

Sánchez-Margallo, F. M., and Sánchez-Margallo, J. A. (2015). “Computer-Assisted Minimally Invasive Surgery: Image-Guided Interventions and Robotic Surgery,” in Computer-Assisted Surgery. Editor X. Chen (New York, NY: Nova Science Publishers, Inc.), 43–94.

Sánchez-Margallo, F. M., Sánchez-Margallo, J. A., Cristo, A., Rodríguez, A., and Suárez, M. (2018). Application of Mixed Reality Technology for Surgical Training in Urology. Surg. Endosc. 32, S655. doi:10.1007/s00464-019-06728-8

Sánchez-Margallo, F. M., Sánchez-Margallo, J. A., Suárez, M., Cristo, A., Rodríguez, A., and Moyano-Cuevas, J. L. (2018). Tecnologías de control gestual y realidad aumentada para la asistencia en cirugía de mínima invasión [Gestural control and augmented reality technologies for assistance in minimally invasive surgery]. Cirugía Española 96, 1. (Espec Congr).

Sánchez-Margallo, F. M., Durán Rey, D., Serrano Pascual, Á., Mayol Martínez, J. A., and Sánchez-Margallo, J. A. (2021). Comparative Study of the Influence of Three-Dimensional versus Two-Dimensional Urological Laparoscopy on Surgeons' Surgical Performance and Ergonomics: A Systematic Review and Meta-Analysis. J. Endourology 35 (2), 123–137. doi:10.1089/end.2020.0284

Sappenfield, J. W., Smith, W. B., Cooper, L. A., Lizdas, D., Gonsalves, D. B., Gravenstein, N., et al. (2018). Visualization Improves Supraclavicular Access to the Subclavian Vein in a Mixed Reality Simulator. Anesth. Analgesia 127, 83–89. doi:10.1213/ane.0000000000002572

Scott, D. J., Cendan, J. C., Pugh, C. M., Minter, R. M., Dunnington, G. L., and Kozar, R. A. (2008). The Changing Face of Surgical Education: Simulation as the New Paradigm. J. Surg. Res. 147 (2), 189–193. doi:10.1016/j.jss.2008.02.014

Smith, R. T., Clarke, T. J., Mayer, W., Cunningham, A., Matthews, B., and Zucco, J. E. (2020). Mixed Reality Interaction and Presentation Techniques for Medical Visualisations. Adv. Exp. Med. Biol. 1260, 123–139. doi:10.1007/978-3-030-47483-6_7

Turini, G., Condino, S., Parchi, P., Viglialoro, R., Piolanti, N., Gesi, M., Ferrari, M., and Ferrari, V. (2018). “A Microsoft HoloLens Mixed Reality Surgical Simulator for Patient-specific Hip Arthroplasty Training,” in International Conference on Augmented Reality, Virtual Reality and Computer Graphics (AVR), Otranto, Italy, 14 July 2018 (Cham, Switzerland: Springer), 201–210.

Viglialoro, R. M., Condino, S., Turini, G., Carbone, M., Ferrari, V., and Gesi, M. (2021). Augmented Reality, Mixed Reality, and Hybrid Approach in Healthcare Simulation: A Systematic Review. Appl. Sci. 11 (5), 2338. doi:10.3390/app11052338

Williams, M. A., McVeigh, J., Handa, A. I., and Lee, R. (2020). Augmented Reality in Surgical Training: a Systematic Review. Postgrad. Med. J. 96 (1139), 537–542. doi:10.1136/postgradmedj-2020-137600

Zuckerman, J. D., Wise, S. K., Rogers, G. A., Senior, B. A., Schlosser, R. J., and DelGaudio, J. M. (2009). The Utility of Cadaver Dissection in Endoscopic Sinus Surgery Training Courses. Am. J. Rhinology allergy 23 (2), 218–224. doi:10.2500/ajra.2009.23.3297

Keywords: mixed reality, medical training, surgical planning, minimally invasive surgery, laparoscopy

Citation: Sánchez-Margallo JA, Plaza de Miguel C, Fernández Anzules RA and Sánchez-Margallo FM (2021) Application of Mixed Reality in Medical Training and Surgical Planning Focused on Minimally Invasive Surgery. Front. Virtual Real. 2:692641. doi: 10.3389/frvir.2021.692641

Received: 08 April 2021; Accepted: 15 October 2021;
Published: 28 October 2021.

Edited by:

Marientina Gotsis, University of Southern California, United States

Reviewed by:

Juan Manuel Jacinto-Villegas, National Council of Science and Technology (CONACYT), Mexico
Nancy Rodriguez, UMR5506 Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), France

Copyright © 2021 Sánchez-Margallo, Plaza de Miguel, Fernández Anzules and Sánchez-Margallo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Juan A. Sánchez-Margallo, jasanchez@ccmijesususon.com
