REVIEW article

Front. Surg., 16 July 2024

Sec. Neurosurgery

Volume 11 - 2024 | https://doi.org/10.3389/fsurg.2024.1427844

Narrative review of patient-specific 3D visualization and reality technologies in skull base neurosurgery: enhancements in surgical training, planning, and navigation

  • 1. Department of Neurosurgery, Faculty of Medicine, Hacettepe University, Ankara, Türkiye

  • 2. Neurosurgery Clinic, Polatli Duatepe State Hospital, Ankara, Türkiye

  • 3. Btech Innovation, METU Technopark, Ankara, Türkiye


Abstract

Recent advances in medical imaging, computer vision, 3-dimensional (3D) modeling, and artificial intelligence (AI) integrated technologies paved the way for generating patient-specific, realistic 3D visualization of pathological anatomy in neurosurgical conditions. Immersive surgical simulations through augmented reality (AR), virtual reality (VR), mixed reality (MxR), extended reality (XR), and 3D printing applications further increased their utilization in current surgical practice and training. This narrative review investigates state-of-the-art studies, the limitations of these technologies, and future directions for them in the field of skull base surgery. We begin with a methodology summary to create accurate 3D models customized for each patient by combining several imaging modalities. Then, we explore how these models are employed in surgical planning simulations and real-time navigation systems in surgical procedures involving the anterior, middle, and posterior cranial skull bases, including endoscopic and open microsurgical operations. We also evaluate their influence on surgical decision-making, performance, and education. Accumulating evidence demonstrates that these technologies can enhance the visibility of the neuroanatomical structures situated at the cranial base and assist surgeons in preoperative planning and intraoperative navigation, thus showing great potential to improve surgical results and reduce complications. Maximum effectiveness can be achieved in approach selection, patient positioning, craniotomy placement, anti-target avoidance, and comprehension of spatial interrelationships of neurovascular structures. 
Finally, we present the obstacles and possible future paths for the broader implementation of these groundbreaking methods in neurosurgery, highlighting the importance of ongoing technological advancements and interdisciplinary collaboration to improve the accuracy and usefulness of 3D visualization and reality technologies in skull base surgeries.

Introduction

The skull base represents one of the most complex areas in human anatomy, comprising important neurovascular structures within an intricate space (1). It is anatomically divided into anterior, middle, and posterior regions. While some neurosurgical pathologies remain confined to a single region, many extend beyond the borders of a particular cranial fossa. For instance, at the skull's center, sellar/parasellar tumors can extend into all three cranial fossae. Endoscopic and open (transcranial) approaches can be utilized separately or in combination to tackle these complex pathologies. Understanding the skull base's structure and operating within it demand experience and accurate visuospatial orientation (2, 3). The inherent challenges of skull base surgery, characterized by complex neuroanatomy, proximity of critical structures, and limited surgical access, underscore the need for highly advanced technologies that enhance the surgeon's capabilities beyond conventional limits (4, 5).

To address these difficulties, personalized three-dimensional (3D) visualization and reality technologies have become essential in planning surgeries and guiding neurosurgeons during operations (6–8). The synthesis of these 3D technologies entails the use of sophisticated software to process radiological data, facilitating the creation of anatomically precise 3D digital and printed models (9, 10). These tangible models serve as a physical navigational map, allowing surgeons to visualize and strategize operations with unprecedented clarity (11–13). A comprehensive knowledge of the specific examples of skull base disorders is required to create these models, encompassing the entire process from 2D image acquisition to the final 3D anatomic and/or pathologic visualization (14–16).

The advancement of imaging and modeling approaches, including augmented reality (AR), virtual reality (VR), extended reality (XR), mixed reality (MxR), and 3D printing, has yielded inventive solutions that cater to the distinct requirements of each area (17–19). AR overlays digital information, such as MR images, onto the real surgical scene, enhancing the surgeon's view during procedures (20). VR immerses users in a virtual environment, allowing detailed preoperative planning and surgical rehearsals (21). MxR merges real and virtual worlds, enabling interaction with physical and digital objects in real time, which is crucial for navigating complex skull base surgeries (22). XR is an umbrella term encompassing AR, VR, and MxR, offering diverse immersive experiences (23). These technologies improve the surgeon's visualization, allowing them to perceive and interact with virtual preoperative plans (24–26). These tools can also offer real-time intraoperative navigation, enhancing the surgeon's understanding, potentially decreasing surgical complications, and improving patient outcomes (27, 28).

Our study is a narrative review of patient-specific 3D visualization and reality technologies in skull base surgery. We carefully evaluated and synthesized the existing literature on this topic and investigated the role of these technologies in the surgical treatment of skull base diseases. This study aimed to advance the field by synthesizing previous studies and providing specialists with guidance on the intricate anatomy of the skull base. Another objective was to assess the progress of endoscopic and open surgical techniques using AR, VR, XR, MxR, and 3D printing in each specific area of the skull base. We also evaluated how these innovations affect the surgeon's approach, decision-making, and surgical performance. Our overall aim was to create a thorough resource that explains the advantages and limitations of these technologies and how they can be incorporated into future skull base innovations.

Generating 3D digital and printing models for neurosurgical planning and navigation

Radiological data acquisition and segmentation

For building 3D visualization models for neurosurgical planning, the acquisition and segmentation of 2D radiological images play a pivotal role. Magnetic resonance imaging (MRI) is generally preferred for its superior soft tissue contrast, which facilitates the delineation of neural tissues, while computed tomography (CT) scans provide details of bony structures (29, 30). Detailed T1-weighted MRIs capture the skin's topography, creating a surface map for external anatomical points. CT angiography defines the skull and vascular network, providing structural clarity and critical detail on bone and vessels. White and grey matter borders are extracted from T1W and T2W MRI sequences, allowing for the differentiation of cerebral tissue layers. Veins and sinuses are delineated with contrast-enhanced MR venography, providing insight into the venous system, while arterial structures are highlighted using time-of-flight (TOF) MR angiography, focusing on blood flow dynamics. Pathological features such as tumors and associated edematous changes are identified through contrast-enhanced MRI sequences, including T1W, T2W, and fluid-attenuated inversion recovery (FLAIR). After alignment and calibration, this multimodal imaging synthesis produces a 3D visualization reflecting the true anatomical complexity essential for skull base operations (8, 31).
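The "alignment and calibration" step above requires bringing the different modalities into a common coordinate frame. As a minimal illustration (not the authors' pipeline, which uses commercial software), the sketch below applies the classic Kabsch algorithm to paired anatomical landmarks, recovering the rigid rotation and translation that map one scan's coordinate frame onto another's. All point values are synthetic.

```python
import numpy as np

def rigid_align(moving, fixed):
    """Kabsch algorithm: find rotation R and translation t that best map
    `moving` landmark points onto `fixed` (least-squares rigid alignment)."""
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fc - R @ mc
    return R, t

# Synthetic example: CT landmarks rotated/shifted relative to MRI space
rng = np.random.default_rng(0)
mri_pts = rng.uniform(0, 100, size=(6, 3))
theta = np.deg2rad(15)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
ct_pts = mri_pts @ R_true.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_align(ct_pts, mri_pts)
aligned = ct_pts @ R.T + t
print(np.allclose(aligned, mri_pts, atol=1e-6))  # True
```

In practice, registration software optimizes an intensity-based similarity metric over the whole volume rather than a handful of landmarks, but the underlying rigid transform has the same form.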

Advanced software tools (e.g., Materialise Mimics) are then used to segment anatomic structures on radiological images. This stage is pivotal as it differentiates between different anatomical features by applying thresholds that recognize variations in tissue density and radiological characteristics using the abovementioned modalities. Segmenting anatomical datasets requires both automated algorithms and manual refinement to ensure precision, which might involve expert radiologists and neurosurgeons for verification (32, 33).
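The density-threshold step that segmentation packages apply can be sketched in a few lines. The Hounsfield-unit windows below are illustrative round numbers, not values taken from any cited study, and the "volume" is a toy array standing in for a CT scan.

```python
import numpy as np

# Synthetic CT volume in Hounsfield units: air ~ -1000, soft tissue ~ +40,
# cortical bone ~ +1200 (illustrative values only).
vol = np.full((4, 4, 4), -1000.0)
vol[1:3, 1:3, 1:3] = 40.0      # soft tissue core
vol[1, 1, 1] = 1200.0          # a single bone voxel

def threshold_mask(volume, lo, hi):
    """Binary mask of voxels whose intensity falls in [lo, hi]."""
    return (volume >= lo) & (volume <= hi)

bone = threshold_mask(vol, 300, 3000)      # a typical bone window
soft = threshold_mask(vol, -100, 300)      # a soft-tissue window
print(bone.sum(), soft.sum())              # 1 7
```

Real pipelines follow this automated pass with manual refinement, since thresholds alone cannot separate tissues of similar density.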

3D reconstruction and model refinement

3D reconstruction follows segmentation, typically through surface or volume rendering techniques. These methods convert the segmented 2D slices into digital 3D models, which then undergo refinement. Available software tools (e.g., Materialise 3-matic) allow for smoothing, optimizing mesh structures, and making anatomical adjustments to ensure the digital model's alignment with the original anatomy (31, 34, 35).
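The "smoothing" of mesh structures mentioned above is commonly some variant of Laplacian smoothing, in which each vertex is pulled toward the average of its neighbors. The sketch below is a simplified, assumption-laden version (uniform weights, a toy tetrahedron), not the algorithm of any specific commercial tool.

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Simple Laplacian smoothing: move each vertex a fraction `lam` of the
    way toward the mean of its mesh neighbors on every iteration."""
    n = len(vertices)
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbors[a] |= {b, c}; neighbors[b] |= {a, c}; neighbors[c] |= {a, b}
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        means = np.array([v[list(nb)].mean(axis=0) for nb in neighbors])
        v += lam * (means - v)
    return v

# A jagged "tetrahedron" whose spiky apex gets pulled toward its base
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0.5, 0.5, 5.0]])
faces = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (0, 2, 3)]
smoothed = laplacian_smooth(verts, faces, iterations=1)
print(smoothed[3])  # apex z-coordinate drops from 5.0 toward the base
```

Production mesh tools add volume-preservation terms and curvature-aware weights so that smoothing does not shrink anatomy, which matters when the model must stay faithful to the source imaging.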

Integration into AR/VR/MxR/XR environments

For AR/VR applications, the 3D models are imported into software environments. Systems like Unity or Unreal Engine can create interactive virtual surgical anatomy, pathology, and surgical operations. The integration usually includes programming for interactions with the model, such as simulating surgical interventions and AR display during surgery (36, 37). For AR and MxR, the models are processed through platforms like Microsoft's HoloLens®, facilitating their overlay onto the real environment or into an MxR scene. This step is crucial for surgical planning, rehearsal, education, and intraoperative guidance, especially in skull base surgery (38–40).
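At the heart of any AR overlay is the projection of a 3D model point, expressed in patient coordinates, into the 2D pixel coordinates of a tracked camera or microscope view. The sketch below shows a plain pinhole-camera projection with invented intrinsics and a trivial pose; real systems obtain the pose from the navigation/tracking hardware and add lens-distortion correction.

```python
import numpy as np

def project(point_3d, K, R, t):
    """Project a 3D model point (world/patient coordinates) into 2D pixel
    coordinates with a pinhole camera model: the core of overlaying a
    virtual model onto a tracked camera view."""
    p_cam = R @ point_3d + t            # world -> camera coordinates
    x, y, z = p_cam
    u = K[0, 0] * x / z + K[0, 2]       # perspective divide + intrinsics
    v = K[1, 1] * y / z + K[1, 2]
    return np.array([u, v])

K = np.array([[800.0, 0.0, 320.0],      # focal lengths, principal point
              [0.0, 800.0, 240.0],      # (hypothetical calibration)
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # camera pose from tracking system
t = np.array([0.0, 0.0, 0.0])

tumor_centroid = np.array([0.01, -0.02, 0.5])   # meters, in front of camera
print(project(tumor_centroid, K, R, t))          # [336. 208.]
```

Errors in R and t (i.e., registration errors) translate directly into pixel misalignment of the overlay, which is why the navigation accuracy figures discussed later in this review matter so much.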

3D printing protocols

For tangible models, the 3D files are prepared for 3D printing via slicing software, translating the model into a series of cross-sectional layers. Parameters such as layer thickness, orientation, and support structures are optimized based on the selected 3D printing technology, such as Stereolithography (SLA), Fused Deposition Modeling (FDM), or Selective Laser Sintering (SLS) (41, 42). Post-processing steps such as removing supports, surface smoothing, and sterilization are essential, especially for models used for surgical purposes (11, 43).
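Slicing software's basic operation, converting the model into cross-sectional layers of a chosen thickness, can be illustrated on a voxelized model. This is a toy sketch (real slicers operate on triangle meshes and also generate toolpaths and supports); the dimensions and layer height are arbitrary.

```python
import numpy as np

def slice_mask(voxel_mask, voxel_size_mm, layer_thickness_mm):
    """Group a voxelized model into print layers along z, the way slicing
    software converts a model into cross-sections for FDM/SLA printing."""
    nz = voxel_mask.shape[2]
    voxels_per_layer = max(1, round(layer_thickness_mm / voxel_size_mm))
    layers = []
    for z0 in range(0, nz, voxels_per_layer):
        # 2D cross-section: material present anywhere within this z-band
        layers.append(voxel_mask[:, :, z0:z0 + voxels_per_layer].any(axis=2))
    return layers

# 10x10x8-voxel "model" at 0.2 mm voxels, printed with 0.4 mm layers
mask = np.zeros((10, 10, 8), dtype=bool)
mask[2:8, 2:8, :] = True
layers = slice_mask(mask, voxel_size_mm=0.2, layer_thickness_mm=0.4)
print(len(layers), layers[0].sum())   # 4 36
```

The layer-thickness parameter trades print time against surface fidelity, which is one reason it is tuned per printing technology as the text notes.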

Quality assurance and clinical validation

Collaboration between clinicians, radiologists, and biomedical engineers is required to produce clinically relevant and accurate models. Verification is a critical phase in which the 3D structures are compared with the original imaging to confirm their accuracy (44, 45). This can be done using analytical software tools that quantify deviations between the models and the radiological source. Clinical validation may also involve using the models in surgical settings or comparing them with intraoperative findings to assess their practical utility and reliability during operations (31).
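One simple way such tools quantify deviation between a model and its radiological source is an overlap metric on the binary segmentations. The sketch below computes the Dice similarity coefficient on synthetic masks; this is a generic illustration, not the specific metric used by any study cited here.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentations:
    2|A∩B| / (|A|+|B|), where 1.0 means perfect overlap."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

source = np.zeros((10, 10, 10), dtype=bool)
source[2:8, 2:8, 2:8] = True              # segmentation from the images
model = np.zeros((10, 10, 10), dtype=bool)
model[3:8, 2:8, 2:8] = True               # model eroded by one slice

print(round(dice(source, model), 3))      # 0.909
```

Surface-distance metrics (e.g., mean or Hausdorff distance between the model surface and the segmented boundary) complement overlap scores, since a high Dice value can still hide locally large deviations at thin structures.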

Anterior skull base surgery

The anterior skull base is situated in the region that lies between the cranial and facial compartments. It can be divided into three clearly defined regions: a midline segment and two lateral segments. The midline segment comprises the cribriform plate, posterior frontal plate, ethmoidal roof, planum sphenoidale, and tuberculum sellae. The roof of the nasal cavity serves as a crucial barrier between the sinonasal tract and the intracranial space. The boundaries between the intracranial compartment and the orbital contents are primarily defined by the lateral segments, which consist principally of the orbital plates of the frontal bones and the lesser wings of the sphenoid. In addition, the midline limbus sphenoidale also contributes to this delineation (46, 47).

This architecture not only maintains the skull's structural strength but also presents potential weak points through which disease may penetrate the anterior cranial cavity. This region can harbor benign tumors, such as meningiomas, as well as malignancies, such as squamous cell carcinomas and esthesioneuroblastomas (48). These disorders frequently exploit the skull base's inherent pathways and delicate barriers, resulting in potential complications such as cerebrospinal fluid leaks and intracranial spread of disease (49, 50).

Endoscopic approaches

The endoscopic endonasal approach (EEA) for anterior skull base surgery has been significantly advanced by 3D modeling (Figure 1) and AR/MxR/XR (7, 51). These technologies have been transformative in treating pathologies such as complex meningiomas, chordomas, chondrosarcomas, and sinonasal malignancies that extend into the anterior skull base (5, 52). Developing 3D models from patient-specific imaging data has further transformed surgical planning, and there has been increasing interest in neurosurgery, as summarized in Table 1. 3D models provide surgeons with a manipulable visualization of the target pathology in relation to the intricate intracranial structures (70). These reconstructions act as a real-time navigational guide, markedly reducing the risk to critical neurovascular structures (75, 76). Moreover, incorporating AR technology into the operative field provides surgeons with a virtual overlay that enhances their 3D orientation, enabling precise tumor excision while maintaining antitarget avoidance (77). Consequently, using these technologies has demonstrated measurable improvements in surgical outcomes. For instance, the adoption of AR and 3D visualization has been associated with a reduction in cerebrospinal fluid (CSF) leak rates from 40% to as low as 2.9% in more recent series, a decrease in cranial nerve (CN) dysfunction rates, and a reduction in internal carotid artery (ICA) injury rates from 0.9% to 0.3%, thereby highlighting the potential of these technologies to significantly enhance surgical precision and patient safety (59–79).

Figure 1

Table 1

Author | Article type | Technique | Hardware | Software | Neurosurgical approach | Key findings
Jean (53) | Case Series | Mixed reality in cranial surgery | Preoperative CT and MRI, microscope-integrated AR | Surgical Theater SRP and SyncAR, StealthStation S8 | Various skull base approaches | MxR facilitates surgical planning and execution, with a learning curve but no extension of surgery or hospitalization.
Gómez Amarillo et al. (19) | Mini-Review | AR for intracranial meningioma resection | Microscopes with integrated HUDs, HMDs | AR platforms, 3D reconstruction software | Meningioma resection, especially skull base | AR enhances surgery by improving visualization of critical structures and tumor boundaries.
Yamaoka et al. (28) | Research Article | 3D printing | Multi-detector row computed tomography data, 3D printer | N/A | Epidural procedures, skull base drilling, dural peeling techniques | The 3D model of the anterior and middle cranial fossa is an effective tool for teaching anatomical knowledge and essential skull base surgery skills, including dural dissection, 3D positioning of structures, and presurgical planning.
Salgado-Lopez et al. (54) | Video Article | Intraoperative heads-up display | Endoscope, Doppler, heads-up display equipment | Virtual reality integration software, SmartBrush | Anterior skull base surgery via pterional craniotomy | Implementation of virtual reality and heads-up display in skull base surgery for enhanced visualization and navigation, ensuring artery preservation and tumor resection.
Jean et al. (39) | 2D Operative Video | AR/VR | Navigation-tracked microscope | Augmented reality (AR) template, virtual reality (VR) rendering | Resection of clinoid meningioma | AR and VR enhance surgical planning and execution, ensuring precise and minimally invasive approaches.
Zawy Alsofy et al. (6) | Research Article | VR | CT and MRI scans | 3D Slicer, VR software | Anterior skull base meningioma resection | 3D-VR significantly influences the detection of tumor-related anatomical structures, recommended head positioning, and surgical approach.
Zeiger et al. (55) | Retrospective Study | 3D digital reconstructions and mixed reality for intraoperative navigation in EEA | Surgical Theater's 3D reconstructions, Brainlab's Cranial Navigation system | DICOM-based 3D reconstruction software from Surgical Theater, Brainlab's optical tracking software | Endoscopic endonasal skull base surgery | Mixed reality technology improved spatial awareness and operative efficiency in skull base surgery.
Lai et al. (56) | Original Article | AR with fusion of intraoperative CBCT on the endoscopic view | Endoscope with camera, C-arm for CBCT, OTS with video cameras | ARSN system, optical tracking | Endoscopic endonasal skull base surgery | Novel AR technique integrating endoscopic and intraoperative CBCT imaging showed sub-millimeter accuracy, potentially increasing safety and efficiency in endoscopic skull base surgery.
Citardi et al. (57) | Literature Review | Augmented reality, microsensors | Compact navigation systems, EM tracking | Preoperative planning software, AR-enhanced navigation | Endoscopic sinus and anterior skull base surgery | Advanced surgical navigation aims for a TRE of 1.0 to 1.5 mm. Incorporating AR technology and microsensors may significantly enhance precision and safety in sinus and skull base surgery.
Li et al. (58) | Feasibility Study | AR navigation system | Endoscopy imaging system, infrared tracking system, workstation | Custom open-source software | Endoscopic sinus and skull base surgery | The AR navigation system provides enhanced visual guidance, reducing operation time and mental workload, especially beneficial for less experienced surgeons.
Porras et al. (59) | Systematic Review | Endoscopic endonasal approach (EEA) | Endoscopy and intraoperative navigation systems | Image-guided surgery software, electronic medical records for data analysis, and possibly software for modeling and simulations in surgical planning | EEA to the skull base, addressing pathologies in anterior, middle, and posterior cranial fossae | The review highlights the advantages of EEA, including a direct trajectory to ventral skull base lesions, avoidance of brain retraction, and improved visualization. The authors stress the importance of understanding and preventing complications such as CSF leaks, cranial nerve dysfunction, pituitary gland dysfunction, ICA injury, infection, and other potential issues.
Lai et al. (60) | Research Article | High-fidelity virtual reality simulation | Microcomputed tomography scans | CardinalSim software | Middle cranial skull base approaches | 3D simulations illustrate neurovascular relationships and interactive drilling in the middle cranial fossa for training, education, and surgical planning purposes.
Thavarajasingam et al. (52) | Systematic Review | AR in transsphenoidal surgery | Various, including optical tracking systems and hybrid endoscopic-AR displays | Various, including ITK-SNAP 2.0, Scopis, Brainlab, etc. | Transsphenoidal endoscopic endonasal surgery (ETS) and microscopic transsphenoidal surgery (MTS) | AR enhances landmark identification and intraoperative navigation, positively impacting surgeon experience and potentially improving accuracy and surgical time. However, the impact on patient outcomes is unclear.
Jean et al. (38) | Case Report | Augmented reality | AR navigation-tracked microscope | Virtual reality rendering software | Anterior petrosectomy | AR provides critical visual cues during AP, enhancing safety and protecting neurovascular structures.
Guo et al. (9) | Retrospective Study | 3D-printed models for surgery | CT, MRI, and CT angiography | Mimics software | Skull base meningioma surgery | 3D-printed models significantly aid in surgical planning, anatomical understanding, and patient education for skull base meningiomas.
Jean and Singh (61) | 2D Operative Video | VR | Endoscope, VR | Virtual reality (VR) rendering | Endoscopic endonasal approach for tuberculum sellae meningioma | Preoperative planning and surgical rehearsal in VR can improve the efficiency of endoscopic skull base surgery.
Lee and Wong (18) | Literature Review | AR/VR | Dextroscope VR System, ImmersiveTouch, NeuroTouch Simulator | NeuroPlanner, NeuroBase, Dextroscope VIVIAN | Management of intracranial tumors, specifically skull base tumor surgeries | VR and AR technologies are instrumental in surgical planning, providing 3D visualization of anatomical structures and facilitating the precise excision of skull base tumors.
Carl et al. (62) | Original Article | Microscope-based AR for visualization of target and risk structures in transsphenoidal surgery | Operating microscopes with integrated head-up displays, intraoperative computed tomography (iCT) | AR visualization, automatic registration using iCT | Transsphenoidal surgery | Microscope-based AR significantly increased accuracy and safety in complex transsphenoidal procedures. Automatic iCT-based registration provided high precision, suggesting it is a reliable tool for enhancing patient safety.
Barber et al. (11) | Case Study | AR, surgical navigation, and 3D printing | CT imaging, Form 2 3D printer, Stealth3D workstation | ITK-SNAP, Unity with Vuforia, Android OS | Transcanal endoscopic approach to the petrous apex | AR and 3D-printed patient-specific models coupled with navigation for preoperative planning provided a realistic simulation for complex lateral skull base surgery, which closely mirrored intraoperative findings and may improve surgical outcomes.
Sato et al. (27) | Research Article | 3D-MFI | High-resolution MRI, computed tomography (CT), digital subtraction angiography (DSA) | Avizo software (version 6.0, Visage Imaging, CA) | Surgical simulation for resection of deep-seated meningiomas | 3D-MFI is effective for surgical planning and education, with precise identification of skull base structures and vessels.
McJunkin et al. (63) | Research Article | Mixed reality platform development | MR head-mounted display (HMD), Microsoft HoloLens | Unity® gaming platform, Visual Studio, ITK-SNAP, MeshLab | Lateral skull base approach | The MR platform effectively visualized temporal bone structures. It improved spatial understanding of anatomy, potentially enhancing surgical navigation and training.
Randazzo et al. (14) | Systematic Review | 3D printing for surgical planning | CT, MRI | ITK-SNAP, Unity, Vuforia | Transcanal endoscopic approach to the petrous apex | 3D printing and AR facilitate virtual surgical planning and simulation, potentially improving patient safety and surgical outcomes in skull base neurosurgery.
Kawamata et al. (64) | Technical Note | Endoscopic augmented reality navigation | Rigid endoscope with LEDs, optical tracking system | N/A | Endonasal transsphenoidal surgery | The AR system improves tumor and anatomic structure visualization during surgery, enhancing safety and accuracy for pituitary tumor treatment.
Olexa et al. (65) | Case Report | Augmented reality (AR) visualization and planning | Microsoft HoloLens 2 head-mounted display | Custom application developed by Hoth Intelligence (Philadelphia, Pennsylvania) | Retrosigmoid craniotomy for vestibular schwannoma | AR technology streamlined surgical planning for vestibular schwannoma, allowing for accurate 3D mapping and visualization of the patient's head, aiding retrosigmoid craniotomy approach decisions.
Hong et al. (66) | Research Study | Augmented reality neuronavigation | Mobile-based system | Self-developed mobile augmented reality navigation system (MARNS) | Retrosigmoid craniotomy | MARNS accurately located the transverse-sigmoid sinus junction with a mean error of 2.88 mm and an average positioning time of 279.71 s. It maintained bone flap integrity in all cases, providing a convenient, cost-effective, and reliable method for neurosurgical navigation.
Schwam et al. (67) | Preliminary Report | AR in posterolateral skull base surgery | BrainLab Curve™, Surgical Theater, Zeiss OPMI® PENTERO® 900 microscope | N/A | Posterolateral skull base surgery, specifically cerebellopontine angle tumor resection | AR showed utility in the preparatory and approach phases of posterolateral skull base surgery, particularly in simulating tumor resection and planning incisions and craniotomies.
Martín-Noguerol et al. (17) | Literature Review | Hybrid CT & MRI 3D printing | Multimodality 3D printers | Registration and segmentation software | Skull base, CNS, spinal surgery planning | Hybrid models enhance pre-operative planning and surgical precision, especially for complex skull base neurosurgeries.
Lin et al. (68) | Original Article | 3D printing of cranial nerves | Connex3 Objet350 3D printer | Mimics Research v17.0, 3-matic v9.0 | Transnasal, frontotemporal, and retrosigmoid approaches | 3D-printed models enhanced the visualization of skull base structures and cranial nerves, facilitated surgical simulation and planning, and improved the accuracy of cranial nerve reconstruction during surgery.
Mascitelli et al. (69) | Clinical Study | HUD/augmented reality | BrainLab Curve™, Zeiss Pentero 900 | Brainlab navigation, Surgical Theater | Multimodal intracranial surgery for skull base and vascular cases | HUD is effective for a range of vascular and oncologic intracranial pathologies in skull base surgery, aiding in various stages from skin incision to arachnoid dissection. Excellent or good accuracy was achieved in most cases; deep lesions had less accuracy. No HUD-related complications were reported.
Pacione et al. (10) | Technical Note | Multimaterial 3D printing | Dual-energy CT, Somatom Force scanner, Objet260 Dental Selection 3D printer | syngo.via, IntelliSpace Portal, STL file format | Complex deformity of skull base and craniovertebral junction surgery | 3D-printed models enhanced surgical planning for complex deformities and were instrumental in choosing the most effective approach and correction strategy.
Baskaran et al. (16) | Literature Review | 3D printing and rapid prototyping | Various 3D printers using powder-based materials such as polymers, ceramics, plastics, resins, superalloys, stainless steel, titanium | Software for converting DICOM images to STL format, such as FreeSurfer and InVesalius, and 3D modeling software such as Blender | Anatomical modeling, skull base training | 3D printing is precise for neurosurgical planning, especially for complex areas like the skull base. It enhances anatomical education and pre-operative simulation.
Dixon et al. (70) | Research Article | LIVE-IGS, critical structure proximity alerts, 3D virtual endoscopy | Optical IGS reflective markers, endoscope, drill, CBCT system | ITK-SNAP 2.0, custom navigation software | Endoscopic transclival approaches | The LIVE-IGS system reduced mental demand, effort, and frustration compared with conventional IGS. It provided accurate, intuitive, and dynamic feedback, which could improve spatial awareness and reduce task workload during surgery.
Cabrilo et al. (71) | Technical Note | Augmented reality-assisted neuronavigation | Operating microscope, neuronavigation workstation | BrainLAB's iPlan platform | Skull base surgery for clivus chordoma | Augmented reality-based neuronavigation overlays 3D neuronavigational data onto the operating field, improving navigation throughout skull base procedures and offering a more intuitive form of image-guided surgery without additional hardware.
Oishi et al. (72) | Research Article | 3D imaging and modeling, interactive virtual simulation (IVS) | 3D printer for plaster models, haptic device | CAD software with 3D capabilities | Skull base and deep tumor surgery | 3D imaging and modeling enhanced understanding of complex anatomical relationships, provided realistic simulation for surgical planning and training, and allowed for the determination of optimal surgical strategies for skull base and deep tumors.
Oishi et al. (73) | Research Article | 3D multifusion volumetric imaging (3D MFVI), including volume rendering and image fusion | 64-channel multislice CT scanner, 1.5-T MRI system, DSA system | Image-analysis software (Real Intage; KGT, Inc.) | Various approaches based on tumor location and type (e.g., retrosigmoid approach with IAC exposure, extended transsphenoidal, etc.) | 3D MFVI techniques enabled adequate visualization of the microsurgical anatomy, facilitated presurgical simulation, and allowed surgeons to determine an appropriate and feasible surgical approach.
Rosahl et al. (74) | Original Article | VR augmentation | Infrared-based image-guidance system | Image guidance laboratories software | Various skull base procedures | VR augmentation can enhance skull base surgery by providing a déjà vu experience of the surgical field, eliminating the need for mental reconstruction of 2D images and potentially improving surgical outcomes.

Advancing skull base neurosurgery: A review of 3D innovations in surgical planning and guidance.

In a video article, the authors studied the “Expanded Endoscopic Endonasal Transtuberculum Approach” for resecting a tuberculum sellae meningioma, utilizing the Surgical Theater SRP7.4.0 (Cleveland, Ohio) for VR preoperative planning and surgical rehearsal. This approach was applied to a 57-year-old female patient presenting with sudden right abducens palsy. The VR simulation demonstrated the absence of anterior cerebral artery enclosure, the tumor's non-extension beyond the ICA laterally, and the adequacy of the surgical corridor for the endonasal approach. Despite lacking haptic feedback, the VR rehearsal significantly contributed to the procedural planning by allowing a 360-degree, multicolored, 3D visualization of the tumor. This method underscored the potential of VR in enhancing surgical precision and efficiency, particularly for surgeons in the early stages of their endoscopic career (61).

In recent studies of the EEA in skull base surgery, researchers have employed a novel platform by Surgical Theater® (55, 61). This platform amalgamates high-definition preoperative imaging data to construct detailed 3D patient anatomy models. The virtual model is then integrated into intraoperative navigation systems, which employ optical tracking to align the virtual and physical surgical fields. The synergy of Surgical Theater's 3D reconstructions with navigation technology provides an MxR view that assists surgeons in real time during surgery. In a recent study, the authors explored MxR visualization in EEA, focusing on a novel technology that combines 3D reconstructions of patient anatomy with intraoperative navigation. Analyzing 134 retrospective cases, MxR facilitated the surgical approach by identifying critical anatomical structures such as the internal carotid arteries (ICA) and optic nerves, improving the safety and efficiency of the procedures. This initial experience suggests that MxR visualization is valuable in complex skull base surgeries (55).

The integration of AR using operating microscopes equipped with head-up displays has been transformative. Surgeons can perform automatic registration by utilizing intraoperative computed tomography (iCT), thus significantly enhancing navigational accuracy. This precision is especially beneficial in transsphenoidal surgeries for anterior skull base tumors, where AR aids in the differentiation of tumor margins and in avoiding critical neurovascular structures like ICA and cranial nerves. The resulting decrease in target registration error (TRE) has been influential in reducing the risk of vascular injury and ensuring near or gross total tumor resection (54, 69).

In a study published in 2019, the authors examined the application of AR in transsphenoidal surgery through microscope-based head-up displays. The study, encompassing 288 transsphenoidal procedures by a single surgeon, integrated AR for 47 patients (16.3%), highlighting its incorporation into the surgical workflow. AR accuracy depends on navigation and microscope calibration: fiducial-based registration yielded a TRE of 2.33 ± 1.30 mm, whereas automatic registration using iCT improved AR accuracy to a TRE of 0.83 ± 0.44 mm (P < 0.001). Additionally, low-dose iCT protocols minimized radiation exposure to the level of a single chest radiograph (0.041 mSv). This advancement provides increased patient safety in complex procedures by significantly enhancing the accuracy of intraoperative navigation, reducing radiation exposure, and facilitating better orientation for the surgeon. Notably, no vascular injuries or new neurological deficits were reported in the 47 AR-assisted transsphenoidal procedures, indicating that AR enhances surgical orientation and comfort, thus contributing to patient safety. The significant reduction in TRE ensures accurate alignment of the AR overlay with the patient's anatomy, minimizing the risk of surgical errors, and the low-dose iCT protocols substantially reduce radiation exposure. These findings emphasize the potential of AR in endoscopic skull base surgery, making procedures safer and more efficient (62).

Another study evaluated the incorporation of AR imaging into the endoscopic view in endonasal skull base surgery. This surgical navigation technique demonstrated sub-millimeter accuracy, employing an augmented reality surgical navigation system (ARSN) with 3D cone beam computed tomography (CBCT). The study verified the accuracy of CBCT image co-registration onto the endoscopic view, with a mean TRE of 0.55 mm and a standard deviation of 0.24 mm. This approach ensures precise surgical navigation and offers real-time endoscopic and diagnostic imaging (56).

AR/MxR/XR and microsensors offer surgeons unparalleled visualization and instrument-tracking capabilities. Integrating AR with surgical navigation systems merges preoperative imaging data with real-time surgical views, providing a more intuitive surgical experience and potentially reducing complications. Microsensors, especially in electromagnetic tracking systems, enable precise placement of instruments and targeted therapies, even in the challenging anatomical landscapes of the sinus and skull base. These advancements bring surgical navigation toward sub-millimeter accuracy and expand the possibilities of minimally invasive surgical techniques. Microsensors have also been shown to offer advantages over conventional neuronavigation and stereotaxis: Citardi et al. reported that while traditional systems often achieve a TRE of 1.5–2.0 mm, incorporating microsensors can reduce this to below 1 mm. In practical terms, such gains in accuracy can significantly affect surgical outcomes; the enhanced precision afforded by microsensors has been associated with reduced intraoperative blood loss and fewer major and minor complications. Studies have reported a 25% rate of intraoperative adjustments based on real-time feedback in surgeries using these navigation systems, leading to better outcomes and a reduced need for revisions. These findings support the enhanced safety and efficacy of microsensors in surgical navigation and justify their adoption (57).

In another study, the authors integrated AR with surgical navigation in a cadaver study, demonstrating its potential for endoscopic sinus surgery. The study utilized the Scopis Hybrid Navigation system to overlay preoperative CT images onto real-time surgical views, achieving accuracy better than 1.5 mm in aligning AR images. This precision is vital for navigating complex sinus structures and for anti-target avoidance. The findings indicate that AR can significantly aid the precise placement of instruments along the frontal sinus pathway, suggesting that AR-based surgical navigation could reduce the risk of complications and enhance surgical safety (80).
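At its core, overlaying a preoperative 3D model onto a calibrated endoscopic view reduces to transforming model points into the camera frame and projecting them through the camera intrinsics. The sketch below shows this pinhole projection step in NumPy; it is a simplified illustration (lens distortion is ignored), and the intrinsic values are hypothetical rather than those of any cited system.

```python
import numpy as np

def project_points(points_mm, K, R, t):
    """Project 3D model points (patient coordinates, mm) onto the 2D image
    plane of a calibrated camera. K: 3x3 intrinsics; R, t: rigid
    patient-to-camera transform (pinhole model, no distortion)."""
    cam = points_mm @ R.T + t            # into camera coordinates
    uv_h = cam @ K.T                     # apply intrinsics (homogeneous pixels)
    return uv_h[:, :2] / uv_h[:, 2:3]    # perspective divide -> pixel coords

# Hypothetical intrinsics: 800 px focal length, principal point at the
# center of a 640x480 image.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
```

A point on the optical axis 100 mm ahead of the camera lands at the principal point (320, 240); any error in R and t shifts every projected point, which is why registration TRE translates directly into overlay misalignment on screen.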

Furthermore, a comprehensive review assessed the current state of surgical navigation technologies. The review highlights that while existing systems generally achieve a TRE of 1.5–2.0 mm, there is a considerable need to improve this precision to 1.0–1.5 mm or, ideally, to 0.5–1.0 mm. Reducing the TRE would improve surgical navigation, directly impacting the effectiveness of sinus and skull base surgeries. The authors call for innovations that could further enhance the accuracy of surgical navigation systems (57).

Transcranial approaches

AR/VR/MxR/XR creates patient-specific cranial models for preoperative planning of interventions such as pterional, bicoronal, supraorbital, and subfrontal craniotomies. These models enhance surgical execution by optimizing the surgical positioning, approach selection, and craniotomy placement in transcranial approaches (Table 1).

3D modeling and AR/MxR/XR technologies may minimize morbidity, particularly in anterior skull base meningiomas. These tumors lie close to critical structures such as the ICA, anterior cerebral arteries, optic and other cranial nerves, and pituitary gland, making accurate resection imperative. In managing anterior cranial base tumors extending into critical neuroanatomical corridors, 3D modeling stands at the forefront of surgical innovation. These technologies are now widely utilized in cases involving neoplasms such as tuberculum sellae, planum sphenoidale, and olfactory groove meningiomas, as well as chondrosarcomas, where the risk of iatrogenic damage to the cranial nerves and adjacent structures is significant. The 3D reconstructions enable surgeons to delineate tumor margins with greater accuracy, thereby enhancing the ability to preserve neurological function while ensuring near or gross total resection. Each application of these technologies substantially improves patient outcomes in skull base neurosurgery. Moreover, the interactive features of VR platforms are highly beneficial in transcranial routes, offering a comprehensive view of the surgical field that overcomes the limitations of traditional 2D imaging. This technological leap marks a milestone in the evolution of neurosurgical protocols and could set a new standard for the surgical management of intracranial pathologies (81).

In a video article, the authors used AR to perform a mini-pterional craniotomy and extradural clinoidectomy on a 69-year-old patient with a clinoid meningioma. This approach leveraged a 3D VR model for surgical planning, which was then projected into the navigation-tracked microscope's eyepiece during surgery, enabling real-time AR guidance. The technique facilitated surgical performance, allowing an optimal surgical opening with a TRE better than 1.5 mm. This novel use of AR underscores the value of projecting pre-validated surgical plans directly onto the patient's anatomy, enabling more accurate and safer interventions (82).

A recent study investigated the utility of 3D VR in preoperative planning for patients with anterior skull base meningiomas. A retrospective analysis of 30 patients revealed that VR-based reconstructions can significantly improve the detection of tumor-related anatomical structures (85% accuracy with VR vs. 74% with conventional imaging, p = 0.002). The VR modality may alter neurosurgeons’ decisions regarding head positioning during surgery (37% lateral rotation recommended with VR compared to 27% with standard imaging, p = 0.009) and influence the choice of surgical approach (36% preferring pterional or extended pterional approaches with VR guidance, p = 0.03). The angles and approaches recommended with VR were determined to provide better exposure of critical anatomical skull base structures, thereby facilitating safer and more effective tumor resection. These angles allowed surgeons to optimize their view and access to the tumor, reducing the need for excessive brain retraction and minimizing potential damage to surrounding tissues. Although similar adjustments can be made with traditional methods, the VR system's ability to simulate these angles preoperatively gives surgeons a clearer understanding of the optimal positioning, which could enable more precise surgical interventions (6). However, it is essential to highlight that these findings were based on retrospective data and surgeon-based evaluations. Clinical outcomes, such as postoperative recovery and complication rates, were not directly assessed. Future studies should focus on objective outcome measures to confirm the clinical advantages of VR.

One of the initial studies validated the utility of three-dimensional multi-fusion volumetric imaging (3D MFVI) in preoperative simulations for skull base tumors. It integrates CT, MRI, and DSA to form 3D reconstructions that vividly depict the tumor's spatial relationships. This approach not only aids in planning the open neurosurgical pathway but also anticipates and navigates potential intraoperative challenges (83).

Central to their methodology is image-analyzing software that combines data from various imaging modalities through volume rendering and image fusion, facilitating a detailed visualization of the microsurgical anatomy. Such visualization supports surgeons in evaluating different surgical approaches, thereby enhancing the safety and efficacy of tumor resection strategies (72). The fundamental study demonstrated that 3D MFVI can predict tumor resectability and identify critical anatomical markers that influence the choice of surgical approach, whether via anterior, middle, or posterior skull base routes. The study evaluated 21 skull base tumors (SBTs) in 20 patients, including acoustic neurinomas (8), jugular neurinomas (3), meningiomas (4, with one being an olfactory groove meningioma), chordomas (3), and others such as facial and hypoglossal neurinomas and a dermoid cyst. The results supported the effectiveness of 3D MFVI in surgical planning. The study concludes that 3D MFVI is valuable in visualizing microsurgical anatomy, can improve surgical approach selection and precision for SBTs, and can serve as an educational tool for training (73).

Middle skull base surgery

The anterior boundary of the middle skull base and fossa is formed by the greater wings of the sphenoid bone, while the posterior limit is formed by the clivus. Laterally, it meets the squamous part of the temporal bone and the anteroinferior part of the parietal bone. The middle fossa contains key anatomical structures, including the sella turcica, which harbors the pituitary gland and is situated between the anterior and posterior clinoid processes. Additionally, there are important foramina, such as the foramen rotundum, ovale, and spinosum, which allow the passage of vital cranial nerves and vessels. The petrous part of the temporal bone serves as the posterior and medial boundary, enclosing the trigeminal ganglion in Meckel's cave, a crucial region above the foramen lacerum. Pathologies of this region include complex tumors such as sphenoid wing meningioma (Figure 2), cavernous sinus pathologies, and sellar/parasellar pathologies, and studies have shown that 3D modeling and AR/VR technologies can be highly helpful before and during these surgeries, as demonstrated in Table 1. The middle skull base's structural complexity supports the brain but poses specific challenges during surgical procedures because of its proximity to vital neurovascular structures. The neurovascular complexities of the middle cranial fossa are segmented to create a realistic surgical environment: segmented 3D models render bones with variable transparency, unveiling the layers of surgical anatomy. Such innovations facilitate surgical dissections and enhance not only neurosurgical planning and execution but also education and training (60).

Figure 2

Endoscopic approaches

AR technologies have addressed complex pathologies such as parasellar meningiomas, pituitary adenomas, and craniopharyngiomas. By integrating 3D colored objects into the real-time surgical view, surgeons can navigate the depths of the middle cranial skull base through endoscopic pathways (64). These models are crucial before and during surgery when the approach requires manipulating the bony structures and neurovascular components densely concentrated in this region. Real-time AR visualizations enhance the surgeon's perspective and aid preoperative planning, allowing a more strategic, real-time approach to excising lesions while minimizing operative morbidity (62).

The integration of AR with endoscopic neuronavigation systems, such as the Scopis Navigation System, has started to be used in pediatric neurosurgery, particularly for complex middle skull base pathologies such as craniopharyngiomas, Rathke cleft cysts, and pituitary adenomas. This technology enables the superimposition of preoperative 2D imaging scans directly onto the surgical field during endoscopic procedures (84). AR-assisted surgery helps surgeons by providing a real-time, augmented view of the lesion's boundaries and essential surrounding structures. In a pediatric study, the team presented AR-assisted neuronavigation for endoscopic surgery on midline skull base pathologies. Over nine years, 17 endoscopic AR-assisted procedures were performed on children with lesions in the sellar and/or parasellar region. The patients (mean age = 14.5 years) presented with various diagnoses, the most common being craniopharyngioma (31.2%). AR navigation was beneficial for accurately targeting lesions and determining their intraoperative extent. Postoperative MRI confirmed radical removal in 65% of oncological cases, with a mean follow-up period of 89 months. There were no fatalities, and only two cases of cerebrospinal fluid fistulas and a secondary abscess required additional surgeries. The study showed that AR offers information directly within the surgeon's field of view, which is valuable given the anatomical variability and rare pathologies of the pediatric population (85).

In the study by Goto et al., the authors introduced a novel AR navigation system incorporating three-dimensional computer graphics (3DCG) modeling for endoscopic transsphenoidal surgery, targeting sellar and parasellar tumors. This approach was developed to address the challenge of accurately identifying tumor locations and their relationships to surrounding structures, which are often distorted by the tumor's presence. The system was evaluated across 15 patients, achieving an average usefulness score of 4.7 out of 5, indicating high effectiveness in surgical navigation. The AR system projected detailed 3DCG models from preoperative imaging onto real-time surgical views, offering surgeons a 3D understanding of the surgical field. Despite its advantages, the system's efficacy varied slightly among surgeons, especially regarding depth perception of lesions, emphasizing the importance of experience in interpreting AR visualizations. The study underscores the value of the AR system for middle skull base tumors (86).

Transcranial approaches

Integrating AR with 3D printing for patient consultations uses advanced software such as Hyperspaces®. In the case of complex skull base cholesteatomas, CAD software translates CT scans into a series of 3D models, which are then linked within AR platforms and accessed via mobile devices. These models present patient-specific pathologies in a format understandable to both patients and physicians at minimal cost (87). Constructing interactive 3D models of the skull base uses software such as Maya to create a detailed and manipulable virtual anatomical landscape. This computer-generated model enables uninterrupted observation and study, promoting better surgical foresight and patient-specific operative planning (88).

The MxR platform was implemented in a research study utilizing the Microsoft HoloLens® to visually represent the anatomy of the middle skull base for lateral skull base approaches. This technology creates 3D holograms from CT images of cadaver heads and temporal bones. The process incorporated semiautomatic and manual segmentation to construct 3D models, which were then integrated into an MxR environment, developed via C# programming, enabling the display of dynamic 3D holograms on the HoloLens headset. This platform allowed users to interact with the virtual images through gaze, voice, and gesture commands. The accuracy assessment measured an average TRE of 5.76 ± 0.54 mm (63).

Creating 3D models to aid skull base surgical education combines comprehensive cadaveric dissections with software for generating virtual replicas. These replicas, used alongside 2D radiographic imaging, allow for an improved understanding of neuroanatomical relationships and surgical approach selection. Such resources are adjuncts to preoperative planning and critical in facilitating trainee evaluation (45). Software such as Mimics converts imaging data into 3D reconstructions, which are then printed to simulate the surgical procedure. One study implemented 3D printing to develop individualized cranial nerve models for skull base tumor surgery. This innovative approach was applied to three patients: two with sellar tumors and one with an acoustic neuroma. The 3D-printed models encompassed detailed representations of the skull, brain tissue, blood vessels, cranial nerves, tumors, and other significant structures, facilitating surgical simulation and allowing surgeons to previsualize and strategize tumor removal while preserving vital cranial nerves. The process involved creating 3D reconstructions from patients’ preoperative imaging data, including CT and MRI scans, using specific imaging sequences and diffusion tensor imaging-based fiber tracking. The study's findings suggest that 3D-printed cranial nerve models significantly aid preoperative planning of skull base surgeries, helping to minimize cranial nerve damage (68).
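The common first step in such pipelines, whether in Mimics or open-source tools, is extracting a binary mask from the imaging volume before surface reconstruction and printing. The sketch below illustrates a minimal version of that step for bone in CT: Hounsfield-unit thresholding followed by keeping the largest connected component. The threshold value and helper name are illustrative assumptions, not taken from any cited study.

```python
import numpy as np
from scipy import ndimage

def segment_bone(ct_hu, threshold=300):
    """Minimal bone segmentation of a CT volume in Hounsfield units:
    threshold, then keep only the largest 3D connected component
    (discards small disconnected fragments before meshing/printing)."""
    mask = ct_hu >= threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```

In practice the resulting mask is converted to a triangle mesh (for example, by marching cubes) and scaled by the voxel spacing before export to a printable format; soft-tissue, vascular, and nerve models require different sequences and far more manual editing than this bone sketch suggests.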

In a study by Pojskić et al. involving 39 patients undergoing surgery, primarily for anterior and middle skull base meningiomas, AR was used with iCT for navigation. Most cases, specifically 26 (66.6%), achieved gross total resection. The study confirmed high registration accuracy with an average TRE of 0.82 ± 0.37 mm. The AR technology, integrated into the surgical microscope, significantly improved surgical precision and enabled better visualization of neurovascular structures without any reported injuries. This approach underscores the potential of AR as a valuable tool in complex skull base surgeries, facilitating safer and more effective tumor resections (89).

VR augmentation employs software capable of rendering high-resolution 3D images. These models offer a virtual operating field, enhancing the surgeon's capabilities with detailed visualizations of the surgical scenario. Rosahl et al. evaluated VR augmentation for its utility in skull base surgery. Their study included data from 110 patients with various skull base pathologies, including sellar and parasellar tumors, lesions in the temporal bone, acoustic neuromas, various other cerebellopontine angle tumors, epidermoids, brainstem lesions, glomus tumors, and craniocervical meningiomas. The primary imaging data, encompassing MRI, CT, and CT angiography, facilitated the creation of a virtual operating field (VOF) with translucent surface modulation and an optional "fly-through" video mode. This innovative approach aimed to enhance image guidance in skull base procedures. The VOF was utilized with an infrared-based image guidance system, allowing real-time comparison with the patient's anatomy during surgery (74).

While integrating AR/VR/MxR/XR technologies into transcranial approaches has shown promising results, it is essential to note that the current level of evidence is primarily based on retrospective case series. Comparative studies between these advanced technologies and conventional approaches are limited. Prospective case-control studies are necessary to establish these technologies’ clinical efficacy and safety. These studies should compare outcomes such as surgical precision, complication rates, and patient recovery times between traditional methods and those enhanced with AR/VR/MxR/XR.

Posterior skull base surgery

The posterior skull base represents a challenging anatomical area, demanding surgical precision and extensive knowledge of its intricate structures. Recent advancements in surgical approaches, notably the integration of AR technologies, have significantly contributed to enhancing operative outcomes and reducing perioperative morbidity, as depicted in Table 1 (54). The application of AR in posterolateral skull base surgery is particularly promising. The technology allows for a fusion of virtual information with the surgical environment, enabling subsurface anatomy and pathology to be superimposed onto the surgeon's real-time surgical view during procedures for vestibular schwannomas or petroclival meningiomas (90, 91). This is highly advantageous when navigating the anatomical complexities near the clivus and cerebellopontine angle, where differentiation between tumor tissue and neurovascular structures such as the facial nerve is paramount (53, 65).

Endoscopic approaches

The use of 3D models in endoscopic approaches to posterior cranial base lesions is limited in the current literature. In one study, a team utilized patient-specific 3D printing, AR, and surgical navigation to facilitate the transcanal endoscopic approach to the petrous apex. The process began with manual segmentation of CT images to generate 3D models. The study then explored AR for virtual preoperative planning; this virtual exploration allowed surgeons to comprehend the anatomy and decide on surgical strategies. A 3D-printed physical model of the patient's temporal bone, incorporating anatomical landmarks and the cyst, was created to simulate the surgical procedure, allowing a tactile and spatial understanding of the petrous bone and the middle and posterior skull base. Navigation technology was employed during both the simulation and the surgery. This integrated approach, combining tactile, visual, and virtual parameters, provided a better understanding of the surgical field, potentially improving patient outcomes (11). An MxR simulator combining VR software such as Unity with physical 3D-printed models can create a hybrid training environment. Physical interaction with the 3D-printed models, tracked by systems such as the HTC Vive®, offers realistic surgical simulation for training purposes and is a cost-effective, accessible solution for surgical education, technique training, and surgical planning (75).

The study conducted by Cabrilo et al. utilized AR-assisted neuronavigation to improve the accuracy of endoscopic skull base surgery in a patient with recurrent clivus chordoma. The researchers utilized AR to display preoperatively segmented images of anatomical structures in the operating field. This enabled the imaging of the tumor and essential structures, including the carotid and vertebral arteries, in real time during surgery. The technology allowed for the modification of image transparency and superimposition depth to align with the surgical focus, greatly assisting surgical navigation. The study emphasized the benefits of incorporating AR into endoscopic techniques for posterior skull base surgeries, improving safety and accuracy (71).

Transcranial approaches

A recent study evaluated the effectiveness of a new mobile AR navigation system in guiding retrosigmoid craniotomy procedures, specifically in localizing the transverse-sigmoid sinus junction. The study included patients who underwent surgery for conditions such as acoustic neuroma, trigeminal neuralgia, and hemifacial spasm. Results emphasized the system's accuracy, with a matching error averaging 2.88 ± 0.69 mm and an average positioning time of 279.71 ± 27.29 s. The system successfully identified and exposed the inner edge of the junction in all cases. These findings suggest that the system provides a reliable and cost-effective option for enhancing surgical efficiency in posterior skull base surgery through accurate positioning (66).

Recent studies showcased AR's integration with standard neuronavigation equipment and microscopes, showing that preoperative MRI and CT data can be effectively utilized intraoperatively to guide the resection of lesions like chordomas. These technological enhancements give surgeons a more comprehensive understanding of the deep-seated pathologies within the posterior skull base (71). In a study involving nearly 40 patients undergoing posterior skull base surgery over two years, AR technology was employed to improve surgical preparation and approach phases. Utilizing systems like the BrainLab Curve™, Surgical Theater, and a Zeiss OPMI PENTERO® 900 microscope, critical structures and points of interest were projected onto the surgical field. This AR application allowed for a delicate surgical approach, optimizing skin incision and maximizing craniotomy effectiveness by visualizing anatomical features such as the dural venous sinuses. Creating a 3D “fly-through,” alongside preoperative imaging, also facilitated a deeper understanding of the pathology. The study suggests that AR can significantly aid preoperative planning and the initial phases of skull base surgeries (67). AR's ability to project a detailed, layered image onto the surgeon's field of view significantly improves the precision of navigation around the brainstem and other vital structures (62, 66). Such technologies have facilitated mental 3D model building, leading to better situational awareness and a lower likelihood of morbidity (65). Visualizing dural venous sinuses through AR systems facilitates optimizing the skin incision and maximizing the craniotomy (92).

To enhance surgical training in selecting skull base approaches for posterior fossa tumors, a team developed open-source 3D models, focusing on seven cases identified from a skull base registry. These cases, chosen based on the feasibility of access through at least three posterior fossa craniotomies, were meticulously segmented and modeled. The project created realistic 3D models for each primary operative approach and two alternatives, available in a platform-neutral format for broad AR/VR and 3D printing applications. This initiative marks a significant advancement in surgical education, utilizing open-source principles to improve understanding of complex neuroanatomy and pathology (93).

The integration of various AR technologies and heads-up display systems has proven its worth in cerebellopontine angle surgeries. Despite the potential for cognitive overload due to a crowded visual field, careful management of the displayed information can mitigate such risks and avoid unintentional harm (94). Moreover, the segmentation and labeling of critical structures facilitated by AR are vital in accurately navigating surgeries within the posterior cranial fossa. In addition to aiding in surgical planning, AR interfaces help reduce cognitive load and operative time, as reported in studies involving all types of skull base procedures (58, 69, 95).

In conclusion, posterior cranial base surgery continues to benefit from technological advancements, with AR/VR/MxR/XR as a significant innovation. These technologies aid in preoperative planning and execution and enhance the surgeon's understanding of complex anatomy, contributing to more effective surgical outcomes (Figure 3). As we continue to embrace these advancements, it is crucial to conduct further studies with more extensive series to substantiate the anecdotal evidence and refine the application of AR in clinical practice.

Figure 3

Education and training

Integrating 3D visualization and reality technologies into neurosurgical training programs has shown significant potential in enhancing the educational experience for residents (96). These advanced tools provide immersive and interactive learning environments, offering several advantages for developing theoretical knowledge and surgical skills such as craniotomy planning (97). Simulation-based training using AR, VR, MxR, and XR technologies allows students or residents to engage in realistic surgical scenarios without the risks associated with actual procedures. VR platforms enable residents to practice complex skull base surgeries in a controlled and safe environment, improving their visuospatial skills and familiarity with intricate anatomical structures.

A recent study by Lai et al. validated a VR simulation for the middle cranial skull base approach using CardinalSim software. The study involved 20 trainees from neurosurgery, otolaryngology, and head and neck surgery. The results showed significant improvements in postsimulation test scores compared to presimulation scores (P < 0.001). Trainees demonstrated statistically significant improvements in the time to complete dissections (P < 0.001), internal auditory canal skeletonization (P < 0.001), completeness of the anterior petrosectomy (P < 0.001), and a reduced number of injuries to critical structures (P = 0.001). These findings underscore the effectiveness of VR in enhancing anatomical understanding and surgical skills, providing a valuable supplement to cadaveric dissections and live surgeries (98).

Munawar et al. also introduced the Fully Immersive Virtual Reality System (FIVRS) for skull-base surgery, which combines advanced surgical simulation software with high-fidelity hardware. FIVRS allows surgeons to follow clinical workflows inside a VR environment and uses advanced rendering designs and drilling algorithms for realistic bone ablation. The system also records extensive multi-modal data for post-analysis, including eye gaze, motion, force, and video of the surgery. Preliminary data from a user study involving surgeons of various expertise levels indicated that FIVRS could differentiate between participants’ skill levels, promising future research on automatic skill assessment. Informal feedback from study participants about the system's intuitiveness and immersiveness was positive, highlighting its potential for surgical training and skill assessment (99).

Moreover, Campisi et al. conducted a systematic review of the role of AR neuronavigation in transsphenoidal surgery. The review emphasized that AR enhances surgical education, training, preoperative planning, and intraoperative navigation. AR helps minimize the anatomical challenges associated with traditional endoscopic or microscopic surgeries, improving the safety and accuracy of skull base interventions. This systematic review highlights AR's potential to significantly improve surgical outcomes by providing real-time guidance and enhancing the surgeon's or residents’ spatial awareness during procedures (100).

These studies demonstrate that reality technologies provide valuable supplements to traditional training methods such as cadaveric dissections and live surgeries. By allowing residents to practice in a risk-free, controlled environment, these tools improve their visuospatial skills, enhance their understanding of complex anatomical structures, and prepare them for actual surgical scenarios.

Recent advancements in photogrammetry and VR have significantly improved the realism and accuracy of these anatomical models (12, 13). Corvino et al. demonstrated the effectiveness of photorealistic 3D model reconstruction of the sellar region for neurosurgical anatomy training. Using photogrammetry methods on four head specimens, the researchers created high-fidelity models replicating bony structures with high realism and accuracy. The interactive nature of these models allows for a 360° self-guided tour, providing a realistic spatial perception of anatomical relationships and depths. This interactive exploration aids residents in learning the sellar region's complex anatomy from transcranial and endonasal perspectives, enhancing their understanding and preparedness for actual surgical procedures (101).

Using 3D models and AR/VR platforms facilitates collaborative learning among residents and between residents and instructors. These tools enable interactive discussions and assessments, allowing instructors to highlight specific anatomical features and surgical steps. Additionally, simulation-based assessments can objectively evaluate residents’ skills and progress, providing targeted feedback and identifying areas for improvement.

Adopting these technologies in neurosurgical education offers a transformative approach to resident training, especially in more complex areas like skull base surgery. By providing immersive, interactive, and detailed learning experiences, these tools enhance the overall educational process, better preparing residents for the complexities of skull base surgery. As these technologies evolve, their integration into neurosurgical training programs is expected to become more widespread, ultimately leading to improved surgical outcomes and patient safety.

Limitations, possible solutions, and future directions

Our review emphasizes the transformative potential of 3D visualization and reality technologies such as AR/VR systems in neurosurgical planning and intraoperative navigation for skull base surgeries. Over the years, these technologies have significantly improved surgical precision by providing patient-specific anatomical visualizations. For example, the reduction in target registration error (TRE) to 0.83 ± 0.44 mm with intraoperative CT (iCT)-based automatic registration, compared with 2.33 ± 1.30 mm for manual fiducial-based registration, exemplifies these gains in precision and reliability (62).
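For readers unfamiliar with the metric, TRE values like those above are typically computed as the root-mean-square of the Euclidean distances between target positions reported by the navigation system and their ground-truth locations. A minimal sketch (the function name and coordinates are illustrative, not taken from the cited study):

```python
import math

def target_registration_error(registered, truth):
    """Root-mean-square Euclidean distance (mm) between navigated
    target positions and their ground-truth locations."""
    assert len(registered) == len(truth), "point sets must be paired"
    squared_dists = [
        sum((a - b) ** 2 for a, b in zip(p, q))
        for p, q in zip(registered, truth)
    ]
    return math.sqrt(sum(squared_dists) / len(squared_dists))

# A uniform 1 mm offset along one axis at every target gives TRE = 1.0 mm
truth = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
registered = [(x + 1.0, y, z) for x, y, z in truth]
print(round(target_registration_error(registered, truth), 2))  # 1.0
```

In practice the registered positions come from the tracking system after fiducial- or surface-based alignment; the sub-millimeter figures reported with iCT-based registration correspond to this kind of aggregate error.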

However, several limitations need to be addressed. One major issue is the “crowding of objects” in these applications: an excess of overlaid objects can cause cognitive overload for the neurosurgeon and strain the rendering algorithms. This can be mitigated by customizing the interface to prioritize and highlight essential information, reducing cognitive load and improving focus during surgery. Future research could focus on developing more intuitive user interfaces with displays customizable to the surgeon's preferences and the specific requirements of the procedure.

The weight of head-mounted devices (HMDs) can pose a significant issue regarding their use. Especially during lengthy skull base procedures, HMDs may lead to discomfort and fatigue for neurosurgeons, potentially affecting performance, precision, and patient outcomes. Addressing this limitation requires ergonomic advancements to create lighter and more comfortable HMDs. Additionally, exploring alternative display methods, such as lightweight glasses or integrated operating room displays, might alleviate some discomfort associated with HMDs (102, 103).

High-quality AR devices are not universally available; their accessibility depends on the resources of the healthcare institution, which has prevented widespread adoption across medical settings. The initial investment and maintenance costs for advanced AR systems are substantial, creating a financial barrier for many hospitals. High-quality AR devices range from $3,000 to $10,000, while VR systems may cost between $1,000 and $5,000. Additionally, segmentation software licenses can cost $5,000–$20,000 annually. The time required to generate accurate models varies from a few hours to several days, depending on the complexity of the case and the proficiency of the surgeon and the computer engineer. Future directions should consider cost-effective solutions and economic models that support integrating these technologies into routine clinical practice, such as collaborative funding models, government grants, or partnerships with technology developers to reduce the financial burden on healthcare institutions.

Current MRI technology limits precise anatomical segmentation because of its resolution constraints, and the resulting insufficiencies in segmentation accuracy hinder detailed anatomical visualization. However, advancements in high-resolution imaging modalities and AI-supported rendering technologies are expected to overcome these challenges. Future developments should focus on creating open-access, high-resolution data sets to facilitate more accurate segmentations and enhance the utility of 3D models in surgical planning.
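To make the segmentation step concrete, pipelines commonly begin with simple intensity thresholding before manual or AI-driven refinement. The sketch below uses a CT-style Hounsfield-unit cutoff, where bone is easiest to illustrate; the 300 HU threshold and function names are assumptions for illustration, not a clinical recommendation:

```python
# Assumed cutoff: cortical bone in CT typically exceeds ~300 HU, so a global
# threshold yields a coarse binary bone mask that a 3D-modeling pipeline would
# then refine (morphological cleanup, manual editing) before mesh generation.
BONE_HU_THRESHOLD = 300

def segment_bone(slice_hu, threshold=BONE_HU_THRESHOLD):
    """Return a binary mask (1 = bone) for one CT slice in Hounsfield units."""
    return [[1 if hu >= threshold else 0 for hu in row] for row in slice_hu]

ct_slice = [
    [-1000,  40, 1200],  # air, soft tissue, dense bone
    [   30, 400,  -50],  # soft tissue, cancellous bone, fat
]
print(segment_bone(ct_slice))  # [[0, 0, 1], [0, 1, 0]]
```

Soft-tissue structures on MRI lack such a fixed intensity scale, which is precisely why the resolution and contrast limits discussed above make their segmentation harder and motivate AI-based methods.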

Integrating HMDs with existing surgical navigation systems can be complex, requiring sophisticated software and hardware alignment to ensure accurate and synchronized AR overlays. Technical issues such as calibration errors, latency, and software glitches can disrupt the surgical workflow, reducing the technology's reliability during critical moments (104).

It is essential to acknowledge the phenomenon of brain shift that can occur during transcranial approaches, particularly in large tumor resections. Brain shift refers to the displacement of brain tissue during surgery, which can progressively reduce the accuracy of navigation systems and the overlays provided by reality technologies. As the operation proceeds, the preoperative imaging data may no longer accurately represent the intraoperative anatomy, leading to potential discrepancies. This can create a false sense of security and, coupled with attention bias, pose significant risks. Ragnhildstveit et al. highlighted the issue of brain shift in glioma surgeries, noting that AR can help disclose and compensate for intraoperative brain shift, although its effectiveness varies with the accuracy of the registration methods used. For instance, they reported that the TRE for AR systems varied significantly, ranging from 0.90 mm to 8.55 mm depending on the specific technique and application. Continuous intraoperative imaging and regular recalibration of navigation systems are crucial to mitigate these risks and ensure the highest level of surgical precision and patient safety. The study also emphasizes the need for consistency in AR workflows and the development of standardized measures to evaluate the accuracy and clinical utility of AR systems (105).
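The recalibration logic described above can be sketched as a simple drift check: once paired anatomical landmarks have moved beyond an accepted error budget, the overlay is flagged for re-registration. The 2 mm tolerance and function names below are hypothetical choices for illustration:

```python
import math

def max_landmark_drift(preop, intraop):
    """Largest Euclidean displacement (mm) among paired anatomical landmarks
    located in preoperative imaging vs. the current intraoperative scene."""
    return max(math.dist(p, q) for p, q in zip(preop, intraop))

def needs_recalibration(preop, intraop, tolerance_mm=2.0):
    """Flag the navigation overlay for re-registration once any landmark
    has shifted beyond the error budget (tolerance is an assumed value)."""
    return max_landmark_drift(preop, intraop) > tolerance_mm

preop = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
intraop = [(0.5, 0.0, 0.0), (10.0, 3.0, 0.0)]  # second landmark shifted 3 mm
print(needs_recalibration(preop, intraop))  # True
```

Real systems derive such drift estimates from intraoperative imaging (iCT, iMRI, or ultrasound) rather than isolated point pairs, but the underlying decision, recalibrate when measured error exceeds a tolerance, is the same.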

Continuous technological advancements and rigorous testing are crucial to improving these systems' robustness and reliability, and collaboration with software developers to enhance compatibility and reduce latency is essential (106). Moreover, ongoing progress in AI and machine learning could substantially automate and refine the generation of 3D models, increasing efficiency and enabling near-real-time deployment (107).

The effective use of AR systems also necessitates adequate training and familiarity, with surgeons needing extensive training to seamlessly integrate these technologies into their workflow. Standardized training programs and continuous education are vital to help overcome the learning curve associated with AR/VR systems. Establishing comprehensive training curricula and simulation-based practice sessions can ensure that surgeons are well-prepared to utilize these advanced tools effectively. Additionally, incorporating AR/VR training into neurosurgical residency programs could help future surgeons become proficient with these technologies from the outset of their careers.

Data management and security pose additional challenges, especially when handling large volumes of imaging data. Robust data management systems that comply with health information regulations are crucial to address these concerns; secure storage and encryption can protect patient information while still allowing seamless access for surgical planning and navigation. Another critical limitation is the potential for attention bias, especially in MxR setups: surgeons might become overly focused on digital overlays, risking the oversight of critical real-world anatomical details. Addressing this issue involves developing interfaces that prioritize essential information and mitigate cognitive overload.

While 3D printing offers the distinct advantage of creating tangible, patient-specific physical models that surgeons can manipulate, AR, VR, MxR, and XR technologies take surgical planning and execution to an entirely new level. These advanced visualization tools provide dynamic, interactive environments that can be integrated directly into the surgical setting. Unlike static 3D printed models, AR and MxR technologies can overlay digital information onto the patient in real time during surgery, enhancing the surgeon's spatial understanding of complex anatomical structures. This capability allows for real-time adjustments and immersive simulations, which aid in preoperative planning and significantly improve intraoperative navigation. By offering a comprehensive view combining physical and digital elements, these technologies can help reduce surgical complications and improve patient outcomes, ultimately pushing the boundaries of precision and safety in skull base neurosurgery.

Promoting collaboration among different sectors is crucial to fully leverage the advantages of 3D modeling and AR/VR technology in clinical practice. Recent advances in photogrammetry (31, 108), computer vision, and simulation technologies (12, 13) can enhance the immersive potential and utilization of personalized 3D neurosurgical models in education, research, and practice (35, 109). Hence, we anticipate the transformation of skull base neurosurgery by integrating advanced technologies and improved computational capabilities. This transformation will bring a new era of tailored surgical interventions in skull base surgery.

Integrating novel technologies into clinical practice should follow established frameworks, such as the IDEAL (Idea, Development, Exploration, Assessment, and Long-term Study) collaboration framework. According to this framework, the respective technologies discussed in this review are primarily in the exploration and assessment stage, with some preliminary evidence supporting their utility. However, comprehensive assessment through well-designed prospective studies is essential to move these technologies toward broader clinical adoption and long-term evaluation. Future research should improve cost-effectiveness, computational efficiency, and user-friendly interfaces to facilitate wider adoption. Finally, comprehensive training programs and seamless integration into existing surgical workflows are vital for maximizing the benefits of 3D models and AR/VR systems. By addressing these challenges, we can significantly enhance the precision, reliability, and accessibility of these technologies, leading to improved surgical outcomes and patient safety in skull base neurosurgery.

Conclusion

Incorporating patient-specific 3D models into surgical planning signifies a fundamental change in how complicated surgical procedures are approached. These models enhance comprehension of complex anatomical relationships and allow surgeons to rehearse and anticipate different surgical situations. When used in conjunction with AR/VR/MxR/XR settings, they become highly effective tools for improving communication among the surgical team and with patients, while also serving as exceptional instructional materials. Maximum effectiveness can be obtained in approach selection, patient positioning, craniotomy placement, anti-target avoidance, and comprehension of the spatial relationships of neurovascular structures. While these advancements herald a new era of precision in surgical planning and execution, the challenges associated with their implementation must be recognized. More cost-effective solutions and financial models are needed to support integration into routine clinical practice, and higher-resolution, more accurate 3D visualization and reality technologies must be developed; advances in computational power and algorithms for faster processing could help streamline this work. To realize the benefits of these advancements for patient care, it is imperative to consistently improve the technology, control expenses, offer education, and perform research that overcomes current limitations. Future research should focus on prospective case-control studies that provide high-level evidence on the comparative efficacy of AR/VR/MxR/XR-enhanced techniques vs. conventional approaches. Additionally, adherence to frameworks such as IDEAL will facilitate structured evaluation and integration, ensuring these innovations improve surgical outcomes effectively and safely.

Statements

Author contributions

II: Data curation, Formal Analysis, Supervision, Visualization, Writing – review & editing. EC: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. BB: Data curation, Writing – review & editing. OT: Data curation, Writing – review & editing. SH: Conceptualization, Data curation, Investigation, Methodology, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  • 1.

    ZadaGBaşkayaMKShahMV. Introduction: surgical management of skull base meningiomas. Neurosurg Focus. (2017) 43(VideoSuppl2). 10.3171/2017.10.FocusVid.Intro

  • 2.

    CaversaccioMLanglotzFNolteLPHäuslerR. Impact of a self-developed planning and self-constructed navigation system on skull base surgery: 10 years experience. Acta Otolaryngol. (2007) 127(4):4037. 10.1080/00016480601002104

  • 3.

    SayyahmelliSDoganIWielandAMPyleMBaşkayaMK. Aggressive, multidisciplinary staged microsurgical resection of a giant cervicomedullary junction chordoma. J Neurol Surg B Skull Base. (2019) 80(S 04):S3789. 10.1055/s-0039-1695062

  • 4.

    ShaoXYuanQQianDYeZChenGle ZhuangKet alVirtual reality technology for teaching neurosurgery of skull base tumor. BMC Med Educ. (2020) 20(1):3. 10.1186/s12909-019-1911-5

  • 5.

    KholiefAAliAElwanySAhmedSYoussefAZahranM. Evaluation of the three-dimensional endoscopy in basic and extended nasal procedures: a clinical and cadaveric study. Int Arch Otorhinolaryngol. (2023) 27(04):e6204. 10.1055/s-0042-1759604

  • 6.

    Zawy AlsofySNakamuraMSuleimanASakellaropoulouIWelzel SaraviaHShalamberidzeDet alCerebral anatomy detection and surgical planning in patients with anterior skull base meningiomas using a virtual reality technique. J Clin Med. (2021) 10(4):681. 10.3390/jcm10040681

  • 7.

    CagiltayNEOzcelikEIsikayIHanaliogluSSusluAEYucelTet alThe effect of training, used-hand, and experience on endoscopic surgery skills in an educational computer-based simulation environment (ECE) for endoneurosurgery training. Surg Innov. (2019) 26(6):72537. 10.1177/1553350619861563

  • 8.

    Gonzalez-RomoNIHanaliogluSMignucci-JiménezGAbramovIXuYPreulMC. Anatomic depth estimation and 3-dimensional reconstruction of microsurgical anatomy using monoscopic high-definition photogrammetry and machine learning. Oper Neurosurg. (2023) 24(4):43244. 10.1227/ons.0000000000000544

  • 9.

    GuoXYHeZQDuanHLinFHZhangGHZhangXHet alThe utility of 3-dimensional-printed models for skull base meningioma surgery. Ann Transl Med. (2020) 8(6):370370. 10.21037/atm.2020.02.28

  • 10.

    PacioneDTanweerOBermanPHarterDH. The utility of a multimaterial 3D printed model for surgical planning of complex deformity of the skull base and craniovertebral junction. J Neurosurg. (2016) 125(5):11947. 10.3171/2015.12.JNS151936

  • 11.

    BarberSRWongKKanumuriVKiringodaRKempfleJRemenschneiderAKet alAugmented reality, surgical navigation, and 3D printing for transcanal endoscopic approach to the petrous apex. OTO Open. (2018) 2(4). 10.1177/2473974X18804492

  • 12.

    GursesMEGungorAGökalpEHanaliogluSKaratas OkumusSYTatarIet alThree-dimensional modeling and augmented and virtual reality simulations of the white matter anatomy of the cerebrum. Operative Neurosurgery. (2022) 23(5):35566. 10.1227/ons.0000000000000361

  • 13.

    GursesMEGungorARahmanovSGökalpEHanaliogluSBerkerMet alThree-dimensional modeling and augmented reality and virtual reality simulation of fiber dissection of the cerebellum and brainstem. Oper Neurosurg. (2022) 23(5):34554. 10.1227/ons.0000000000000358

  • 14.

    RandazzoMPisapiaJSinghNThawaniJ. 3D printing in neurosurgery: a systematic review. Surg Neurol Int. (2016) 7(34):801. 10.4103/2152-7806.194059

  • 15.

    NaftulinJSKimchiEYStreamlinedCS. Inexpensive 3D printing of the brain and skull. PLoS One. (2015) 10(8):e0136198. 10.1371/journal.pone.0136198

  • 16.

    BaskaranVŠtrkaljGŠtrkaljMDi IevaA. Current applications and future perspectives of the use of 3D printing in anatomical training and neurosurgery. Front Neuroanat. (2016) 10:69. 10.3389/fnana.2016.00069

  • 17.

    Martín-NoguerolTPaulano-GodinoFRiascosRFCalabia-del-CampoJMárquez-RivasJLunaA. Hybrid computed tomography and magnetic resonance imaging 3D printed models for neurosurgery planning. Ann Transl Med. (2019) 7(22):684684. 10.21037/atm.2019.10.109

  • 18.

    LeeCWongGKC. Virtual reality and augmented reality in the management of intracranial tumors: a review. J Clin Neurosci. (2019) 62:1420. 10.1016/j.jocn.2018.12.036

  • 19.

    Gómez AmarilloDFOrdóñez-RubianoEGRamírez-SanabriaADFigueredoLFVargas-OsorioMPRamonJFet alAugmented reality for intracranial meningioma resection: a mini-review. Front Neurol. (2023) 14:1269014. 10.3389/fneur.2023.1269014

  • 20.

    BegagićEBečulićHPugonjaRMemićZBalogunSDžidić-KrivićAet alAugmented reality integration in skull base neurosurgery: a systematic review. Medicina (B Aires). (2024) 60(2):335. 10.3390/medicina60020335

  • 21.

    JudyBFMentaAPakHLAzadTDWithamTF. Augmented reality and virtual reality in spine surgery. Neurosurg Clin N Am. (2024) 35(2):20716. 10.1016/j.nec.2023.11.010

  • 22.

    NajeraELockardGSaez-AlegreMPiperKJeanWC. Mixed reality in neurosurgery: redefining the paradigm for arteriovenous malformation planning and navigation to improve patient outcomes. Neurosurg Focus. (2024) 56(1):E5. 10.3171/2023.10.FOCUS23637

  • 23.

    BuwaiderAEl-HajjVGMahdiOAIopAGhariosMde GiorgioAet alExtended reality in cranial and spinal neurosurgery—a bibliometric analysis. Acta Neurochir (Wien). (2024) 166(1):194. 10.1007/s00701-024-06072-4

  • 24.

    AydinSOBarutOYilmazMOSahinBAkyoldasGAkgunMYet alUse of 3-dimensional modeling and augmented/virtual reality applications in microsurgical neuroanatomy training. Oper Neurosurg. (2023) 24(3):31823. 10.1227/ons.0000000000000524

  • 25.

    TagaytayanRKelemenASik-LanyiC. Augmented reality in neurosurgery. Arch Med Sci. (2018) 14(3):5728. 10.5114/aoms.2016.58690

  • 26.

    KazemzadehKAkhlaghdoustMZaliA. Advances in artificial intelligence, robotics, augmented and virtual reality in neurosurgery. Front Surg. (2023) 10:1241923. 10.3389/fsurg.2023.1241923

  • 27.

    SatoMTateishiKMurataHKinTSuenagaJTakaseHet alThree-dimensional multimodality fusion imaging as an educational and planning tool for deep-seated meningiomas. Br J Neurosurg. (2018) 32(5):50915. 10.1080/02688697.2018.1485877

  • 28.

    YamaokaHSugawaraTHirabayashiTWanibuchiMMaeharaT. A three-dimensional anterior and middle cranial fossa model for skull base surgical training with two layers of the colored dura mater. World Neurosurg. (2023) 176:e57586. 10.1016/j.wneu.2023.05.105

  • 29.

    TomlinsonSBHendricksBKCohen-GadolA. Immersive three-dimensional modeling and virtual reality for enhanced visualization of operative neurosurgical anatomy. World Neurosurg. (2019) 131:31320. 10.1016/j.wneu.2019.06.081

  • 30.

    MoronePJShahKJHendricksBKCohen-GadolAA. Virtual, 3-dimensional temporal bone model and its educational value for neurosurgical trainees. World Neurosurg. (2019) 122:e14125. 10.1016/j.wneu.2018.11.074

  • 31.

    HanaliogluSRomoNGMignucci-JiménezGTuncOGursesMEAbramovIet alDevelopment and validation of a novel methodological pipeline to integrate neuroimaging and photogrammetry for immersive 3D cadaveric neurosurgical simulation. Front Surg. (2022) 9:878378. 10.3389/fsurg.2022.878378

  • 32.

    MandoliniMBrunziniAFaccoGMazzoliAForcelleseAGiganteA. Comparison of three 3D segmentation software tools for hip surgical planning. Sensors. (2022) 22(14):5242. 10.3390/s22145242

  • 33.

    FickTvan DoormaalJAMTosicLvan ZoestRJMeulsteeJWHovingEWet alFully automatic brain tumor segmentation for 3D evaluation in augmented reality. Neurosurg Focus. (2021) 51(2):E14. 10.3171/2021.5.FOCUS21200

  • 34.

    SahinBAydinSOYilmazMOSaygiTHanaliogluSAkyoldasGet alContralateral vs. ipsilateral approach to superior hypophyseal artery aneurysms: an anatomical study and morphometric analysis. Front Surg. (2022) 9:915310. 10.3389/fsurg.2022.915310

  • 35.

    HanaliogluSGursesMEMignucci-JiménezGGonzález-RomoNIWinklerEAPreulMCet alInfragalenic triangle as a gateway to dorsal midbrain and posteromedial thalamic lesions: descriptive and quantitative analysis of microsurgical anatomy. J Neurosurg. (2024) 140(3):86679. 10.3171/2023.6.JNS222871

  • 36.

    JeanWCPiperKFelbaumDRSaez-AlegreM. The inaugural “century” of mixed reality in cranial surgery: virtual reality rehearsal/augmented reality guidance and its learning curve in the first 100-case, single-surgeon series. Oper Neurosurg. (2024) 26(1):2837. 10.1227/ons.0000000000000908

  • 37.

    Gonzalez-RomoNIMignucci-JiménezGHanaliogluSGursesMEBahadirSXuYet alVirtual neurosurgery anatomy laboratory: a collaborative and remote education experience in the metaverse. Surg Neurol Int. (2023) 14:90. 10.25259/SNI_162_2023

  • 38.

    JeanWCFelbaumDR. The use of augmented reality to improve safety of anterior petrosectomy: two-dimensional operative video. World Neurosurg. (2021) 146:162. 10.1016/j.wneu.2020.11.054

  • 39.

    JeanWCSackKDTsenAR. Augmented-reality template guided transorbital approach for intradural tumors. Neurosurg Focus Video. (2022) 6(1):V3. 10.3171/2021.10.FOCVID21172

  • 40.

    JeanWCHuangMCFelbaumDR. Optimization of skull base exposure using navigation-integrated, virtual reality templates. J Clin Neurosci. (2020) 80:12530. 10.1016/j.jocn.2020.08.018

  • 41.

    LiHLuLLiNZiLWenQ. Application of three-dimensional (3D) printing in neurosurgery. Adv Mater Sci Eng. (2022) 2022:113. 10.1155/2022/8015625

  • 42.

    VakhariaVNVakhariaNNHillCS. Review of 3-dimensional printing on cranial neurosurgery simulation training. World Neurosurg. (2016) 88:18898. 10.1016/j.wneu.2015.12.031

  • 43.

    RengierFMehndirattaAvon Tengg-KobligkHZechmannCMUnterhinninghofenRKauczorHUet al3D printing based on imaging data: review of medical applications. Int J Comput Assist Radiol Surg. (2010) 5(4):33541. 10.1007/s11548-010-0476-x

  • 44.

    WaranVDevarajPHari ChandranTMuthusamyKARathinamAKBalakrishnanYKet alThree-dimensional anatomical accuracy of cranial models created by rapid prototyping techniques validated using a neuronavigation station. J Clin Neurosci. (2012) 19(4):5747. 10.1016/j.jocn.2011.07.031

  • 45.

    GraffeoCSPerryACarlstromLPPeris-CeldaMAlexanderADickensHJet al3D printing for complex cranial surgery education: technical overview and preliminary validation study. J Neurol Surg B Skull Base. (2022) 83(S 02):e10512. 10.1055/s-0040-1722719

  • 46.

    PlouPSerioliSLeonelLCPCAlexanderAYAgostiEVilanyLet alSurgical anatomy and approaches of the anterior cranial Fossa from a transcranial and endonasal perspective. Cancers (Basel). (2023) 15(9):2587. 10.3390/cancers15092587

  • 47.

    FerrariMMattavelliDSchreiberANicolaiP. Macroscopic and endoscopic anatomy of the anterior skull base and adjacent structures. Adv Otorhinolaryngol. (2020) 84:112. 10.1159/000457921

  • 48.

    GardnerPAKassamABThomasASnydermanCHCarrauRLMintzAHet alEndoscopic endonasal resection of anterior cranial base meningiomas. Neurosurgery. (2008) 63(1):3654. 10.1227/01.NEU.0000335069.30319.1E

  • 49.

    McleanTFitzgeraldCEaganALongSMCracchioloJShahJet alUnderstanding frozen section histopathology in sinonasal and anterior skull base malignancy and proposed reporting guidelines. J Surg Oncol. (2023) 128(8):124350. 10.1002/jso.27429

  • 50.

    Silveira-BertazzoGLiRRejane-HeimTCMartinez-PerezRAlbonette-FelicioTSholkamy DiabAGet alEndoscopic approaches to skull base malignancies affecting the anterior fossa. J Neurosurg Sci. (2021) 65(2):169–80. 10.23736/S0390-5616.21.05170-5

  • 51.

    DixonBJDalyMJChanHVescanAWitterickIJIrishJC. Augmented real-time navigation with critical structure proximity alerts for endoscopic skull base surgery. Laryngoscope. (2014) 124(4):8539. 10.1002/lary.24385

  • 52.

    ThavarajasingamSGVardanyanRArjomandi RadAThavarajasingamAKhachikyanAMendozaNet alThe use of augmented reality in transsphenoidal surgery: a systematic review. Br J Neurosurg. (2022) 36(4):45771. 10.1080/02688697.2022.2057435

  • 53.

    JeanWC. The paramedian supracerebellar infratentorial approach: 2-dimensional operative video. Oper Neurosurg. (2024) 26(1):100100. 10.1227/ons.0000000000000913

  • 54.

    Salgado-LopezLOemkeHFengRMatsoukasSMoccoJShrivastavaRet alIntraoperative use of heads-up display in skull base surgery. Neurosurg Focus Video. (2022) 6(1):V2. 10.3171/2021.10.FOCVID21177

  • 55.

    ZeigerJCostaABedersonJShrivastavaRKIloretaAMC. Use of mixed reality visualization in endoscopic endonasal skull base surgery. Oper Neurosurg. (2020) 19(1):4352. 10.1093/ons/opz355

  • 56.

    LaiMSkyrmanSShanCBabicDHomanREdströmEet alFusion of augmented reality imaging with the endoscopic view for endonasal skull base surgery; a novel application for surgical navigation based on intraoperative cone beam computed tomography and optical tracking. PLoS One. (2020) 15(1):e0227312. 10.1371/journal.pone.0227312

  • 57.

    CitardiMJYaoWLuongA. Next-generation surgical navigation systems in Sinus and skull base surgery. Otolaryngol Clin North Am. (2017) 50(3):61732. 10.1016/j.otc.2017.01.012

  • 58.

    LiLYangJChuYWuWXueJLiangPet alA novel augmented reality navigation system for endoscopic sinus and skull base surgery: a feasibility study. PLoS One. (2016) 11(1):e0146996. 10.1371/journal.pone.0146996

  • 59.

    PorrasJLRowanNRMukherjeeD. Endoscopic endonasal skull base surgery complication avoidance: a contemporary review. Brain Sci. (2022) 12(12):1685. 10.3390/brainsci12121685

  • 60.

    LaiCLuiJTChenJMLinVYAgrawalSKBlevinsNHet alHigh-fidelity virtual reality simulation for the middle cranial Fossa approach—modules for surgical rehearsal and education. Oper Neurosurg. (2022) 23(6):50513. 10.1227/ons.0000000000000387

  • 61.

    JeanWCSinghA. Expanded endoscopic endonasal transtuberculum approach for tuberculum Sellae meningioma: operative video with 360-degree fly-through and surgical rehearsal in virtual reality: 2-dimensional operative video. Oper Neurosurg. (2020) 19(2):E17980. 10.1093/ons/opaa017

  • 62.

    CarlBBoppMVoellgerBSaßBNimskyC. Augmented reality in transsphenoidal surgery. World Neurosurg. (2019) 125:e87383. 10.1016/j.wneu.2019.01.202

  • 63.

    McJunkinJLJiramongkolchaiPChungWSouthworthMDurakovicNBuchmanCAet alDevelopment of a mixed reality platform for lateral skull base anatomy. Otol Neurotol. (2018) 39(10):e113742. 10.1097/MAO.0000000000001995

  • 64.

    KawamataTIsekiHShibasakiTHoriT. Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note. Neurosurgery. (2002) 50(6):13937. 10.1097/00006123-200206000-00038

  • 65.

    OlexaJTrangAFlessnerRLabibM. Case report: use of novel AR registration system for presurgical planning during vestibular schwannoma resection surgery. Front Surg. (2024) 11:1304039. 10.3389/fsurg.2024.1304039

  • 66.

    HongWHuangXChenZHuangSWenYHeBLiuYLinY. A low-cost mobile-based augmented reality neuronavigation system for retrosigmoid craniotomy. Oper Neurosurg. (2023) 26(6):695701. 10.1227/ons.0000000000001026

  • 67.

    SchwamZGKaulVFBuDDIloretaACBedersonJBPerezEet alThe utility of augmented reality in lateral skull base surgery: a preliminary report. Am J Otolaryngol. (2021) 42(4):102942. 10.1016/j.amjoto.2021.102942

  • 68.

    LinJZhouZGuanJZhuYLiuYYangZet alUsing three-dimensional printing to create individualized cranial nerve models for skull base tumor surgery. World Neurosurg. (2018) 120:e14252. 10.1016/j.wneu.2018.07.236

  • 69.

    MascitelliJRSchlachterLChartrainAGOemkeHGilliganJCostaABet alNavigation-linked heads-up display in intracranial surgery: early experience. Oper Neurosurg. (2018) 15(2):18493. 10.1093/ons/opx205

  • 70.

    PrismanEDalyMJChanHSiewerdsenJHVescanAIrishJC. Real-time tracking and virtual endoscopy in cone-beam CT-guided surgery of the sinuses and skull base in a cadaver model. Int Forum Allergy Rhinol. (2011) 1(1):707. 10.1002/alr.20007

  • 71.

    CabriloISarrafzadehABijlengaPLandisBNSchallerK. Augmented reality-assisted skull base surgery. Neurochirurgie. (2014) 60(6):3046. 10.1016/j.neuchi.2014.07.001

  • 72.

    OishiMFukudaMYajimaNYoshidaKTakahashiMHiraishiTet alInteractive presurgical simulation applying advanced 3D imaging and modeling techniques for skull base and deep tumors. J Neurosurg. (2013) 119(1):94105. 10.3171/2013.3.JNS121109

  • 73.

    OishiMFukudaMIshidaGSaitoAHiraishiTFujiiY. Presurgical simulation with advanced 3-dimensional multifusion volumetric imaging in patients with skull base tumors. Oper Neurosurg. (2011) 68:ons18899. 10.1227/NEU.0b013e318207b3ad

  • 74.

    RosahlSGharabaghiAHubbeUShahidiRSamiiM. Virtual reality augmentation in skull base surgery. Skull Base. (2006) 16(2):05966. 10.1055/s-2006-931620

  • 75.

    BarberSRJainSSonYChangEH. Virtual functional endoscopic Sinus surgery simulation with 3D-printed models for mixed-reality nasal endoscopy. Otolaryngol Head Neck Surg. (2018) 159(5):9337. 10.1177/0194599818797586

  • 76.

    OnishiKFumiyamaSMikiYNonakaMKoedaMNoborioH. Study on the development of augmented-reality navigation system for transsphenoidal surgery. In: Human-Computer Interaction. Human Values and Quality of Life: Thematic Area, HCI 2020, Held as Part of the 22nd International Conference, HCII 2020, July 19-24, Proceedings, Part III 22. Copenhagen, Denmark: Springer International Publishing (2020). p. 62338. 10.1007/978-3-030-49065-2_43

  • 77.

    BongJHSongHOhYParkNKimHParkS. Endoscopic navigation system with extended field of view using augmented reality technology. Int J Med Robot Comput Assist Surg. (2018) 14(2):e1886. 10.1002/rcs.1886

  • 78.

    HannanCJKelleherEJavadpourM. Methods of skull base repair following endoscopic endonasal tumor resection: a review. Front Oncol. (2020) 10:1614. 10.3389/fonc.2020.01614

  • 79.

    MirotaDJWangHTaylorRHIshiiMGalliaGLHagerGD. A system for video-based navigation for endoscopic endonasal skull base surgery. IEEE Trans Med Imaging. (2012) 31(4):96376. 10.1109/TMI.2011.2176500

  • 80.

    CitardiMJAgbetobaABigcasJLuongA. Augmented reality for endoscopic sinus surgery with surgical navigation: a cadaver study. Int Forum Allergy Rhinol. (2016) 6(5):5238. 10.1002/alr.21702

  • 81.

    Zawy AlsofySSakellaropoulouINakamuraMEweltCSalmaALewitzMet alImpact of virtual reality in arterial anatomy detection and surgical planning in patients with unruptured anterior communicating artery aneurysms. Brain Sci. (2020) 10(12):963. 10.3390/brainsci10120963

  • 82.

    JeanWC. Mini-pterional craniotomy and extradural clinoidectomy for clinoid meningioma: optimization of exposure using augmented reality template: 2-dimensional operative video. Oper Neurosurg. (2020) 19(6):E610E610. 10.1093/ons/opaa238

  • 83.

    OishiMFukudaMIshidaGSaitoAHiraishiTFujiiY. Prediction of the microsurgical window for skull-base tumors by advanced three-dimensional multi-fusion volumetric imaging. Neurol Med Chir (Tokyo). (2011) 51(3):2017. 10.2176/nmc.51.201

  • 84.

    BoppMHASaßBPojskićMCorrFGrimmDKemmlingAet alUse of neuronavigation and augmented reality in transsphenoidal pituitary adenoma surgery. J Clin Med. (2022) 11(19):5590. 10.3390/jcm11195590

  • 85.

    PennacchiettiVStoelzelKTietzeALankesESchaumannAUeckerFCet alFirst experience with augmented reality neuronavigation in endoscopic assisted midline skull base pathologies in children. Childs Nerv Syst. (2021) 37(5):152534. 10.1007/s00381-021-05049-3

  • 86.

    Goto Y, Kawaguchi A, Inoue Y, Nakamura Y, Oyama Y, Tomioka A, et al. Efficacy of a novel augmented reality navigation system using 3D computer graphic modeling in endoscopic transsphenoidal surgery for sellar and parasellar tumors. Cancers (Basel). (2023) 15(7):2148. 10.3390/cancers15072148

  • 87.

    Cheng K, Mukherjee P, Curthoys I. Development and use of augmented reality and 3D printing in consulting patient with complex skull base cholesteatoma. Virtual Phys Prototyp. (2017) 12(3):241–8. 10.1080/17452759.2017.1310050

  • 88.

    Kakizawa Y, Hongo K, Rhoton AL. Construction of a three-dimensional interactive model of the skull base and cranial nerves. Neurosurgery. (2007) 60(5):901–10. 10.1227/01.NEU.0000255422.86054.51

  • 89.

    Pojskić M, Bopp MHA, Saß B, Carl B, Nimsky C. Microscope-based augmented reality with intraoperative computed tomography-based navigation for resection of skull base meningiomas in consecutive series of 39 patients. Cancers (Basel). (2022) 14(9):2302. 10.3390/cancers14092302

  • 90.

    Panesar SS, Magnetta M, Mukherjee D, Abhinav K, Branstetter BF, Gardner PA, et al. Patient-specific 3-dimensionally printed models for neurosurgical planning and education. Neurosurg Focus. (2019) 47(6):E12. 10.3171/2019.9.FOCUS19511

  • 91.

    Jean WC, Yang Y, Srivastava A, Tai AX, Herur-Raman A, Kim HJ, et al. Study of comparative surgical exposure to the petroclival region using patient-specific, petroclival meningioma virtual reality models. Neurosurg Focus. (2021) 51(2):E13. 10.3171/2021.5.FOCUS201036

  • 92.

    Toader C, Eva L, Tataru CI, Covache-Busuioc RA, Bratu BG, Dumitrascu DI, et al. Frontiers of cranial base surgery: integrating technique, technology, and teamwork for the future of neurosurgery. Brain Sci. (2023) 13(10):1495. 10.3390/brainsci13101495

  • 93.

    Carlstrom LP, Graffeo CS, Perry A, Nguyen BT, Alexander AE, Holroyd MJ, et al. Three-dimensional modeling for augmented and virtual reality-based posterior fossa approach selection training: technical overview of novel open-source materials. Oper Neurosurg. (2022) 22(6):409–24. 10.1227/ons.0000000000000154

  • 94.

    Meola A, Cutolo F, Carbone M, Cagnazzo F, Ferrari M, Ferrari V. Augmented reality in neurosurgery: a systematic review. Neurosurg Rev. (2017) 40(4):537–48. 10.1007/s10143-016-0732-9

  • 95.

    Besharati Tabrizi L, Mahvash M. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. J Neurosurg. (2015) 123(1):206–11. 10.3171/2014.9.JNS141001

  • 96.

    Bernard F, Gallet C, Fournier HD, Laccoureye L, Roche PH, Troude L. Toward the development of 3-dimensional virtual reality video tutorials in the French neurosurgical residency program. Example of the combined petrosal approach in the French college of neurosurgery. Neurochirurgie. (2019) 65(4):152–7. 10.1016/j.neuchi.2019.04.004

  • 97.

    Haider S, Air E, Kou Z, Rock J. Anatomic review in 3D augmented reality alters craniotomy planning among residents. World Neurosurg. (2024) 184:e524–9. 10.1016/j.wneu.2024.01.163

  • 98.

    Lai C, Lui JT, de Lotbiniere-Bassett M, Chen JM, Lin VY, Agrawal SK, et al. Virtual reality simulation for the middle cranial fossa approach: a validation study. Oper Neurosurg. (2024) 26(1):78–85. 10.1227/ons.0000000000000915

  • 99.

    Munawar A, Li Z, Nagururu N, Trakimas D, Kazanzides P, Taylor RH, et al. Fully immersive virtual reality for skull-base surgery: surgical training and beyond. Int J Comput Assist Radiol Surg. (2023) 19(1):51–9. 10.1007/s11548-023-02956-5

  • 100.

    Campisi BM, Costanzo R, Gulino V, Avallone C, Noto M, Bonosi L, et al. The role of augmented reality neuronavigation in transsphenoidal surgery: a systematic review. Brain Sci. (2023) 13(12):1695. 10.3390/brainsci13121695

  • 101.

    Corvino S, Piazza A, Spiriev T, Tafuto R, Corrivetti F, Solari D, et al. The sellar region as seen from transcranial and endonasal perspectives: exploring bony landmarks through new surface photorealistic three-dimensional model reconstruction for neurosurgical anatomy training. World Neurosurg. (2024) 185:e367–75. 10.1016/j.wneu.2024.02.022

  • 102.

    Demetz M, Abramovic A, Krigers A, Bauer M, Lener S, Pinggera D, et al. Cadaveric study of ergonomics and performance using a robotic exoscope with a head-mounted display in spine surgery. J Robot Surg. (2024) 18(1):6. 10.1007/s11701-023-01777-7

  • 103.

    Abramovic A, Demetz M, Krigers A, Bauer M, Lener S, Pinggera D, et al. Surgeon’s comfort: the ergonomics of a robotic exoscope using a head-mounted display. Brain Spine. (2022) 2:100855. 10.1016/j.bas.2021.100855

  • 104.

    Bocanegra-Becerra JE, Acha Sánchez JL, Castilla-Encinas AM, Rios-Garcia W, Mendieta CD, Quiroz-Marcelo DA, et al. Toward a frontierless collaboration in neurosurgery: a systematic review of remote augmented and virtual reality technologies. World Neurosurg. (2024) 187:114–21. 10.1016/j.wneu.2024.04.048

  • 105.

    Ragnhildstveit A, Li C, Zimmerman MH, Mamalakis M, Curry VN, Holle W, et al. Intra-operative applications of augmented reality in glioma surgery: a systematic review. Front Surg. (2023) 10:1245851. 10.3389/fsurg.2023.1245851

  • 106.

    Hanalioglu S, Gurses ME, Baylarov B, Tunc O, Isikay I, Cagiltay NE, et al. Quantitative assessment and objective improvement of the accuracy of neurosurgical planning through digital patient-specific 3D models. Front Surg. (2024) 11:1386091. 10.3389/fsurg.2024.1386091

  • 107.

    Cekic E, Pinar E, Pinar M, Dagcinar A. Deep learning-assisted segmentation and classification of brain tumor types on magnetic resonance and surgical microscope images. World Neurosurg. (2024) 182:e196–204. 10.1016/j.wneu.2023.11.073

  • 108.

    Gurses ME, Gungor A, Hanalioglu S, Yaltirik CK, Postuk HC, Berker M, et al. Qlone®: a simple method to create 360-degree photogrammetry-based 3-dimensional model of cadaveric specimens. Oper Neurosurg. (2021) 21(6):E488–93. 10.1093/ons/opab355

  • 109.

    Cikla U, Sahin B, Hanalioglu S, Ahmed AS, Niemann D, Baskaya MK. A novel, low-cost, reusable, high-fidelity neurosurgical training simulator for cerebrovascular bypass surgery. J Neurosurg. (2019) 130(5):1663–71. 10.3171/2017.11.JNS17318

Keywords

augmented (virtual) reality, mixed reality, 3D printing, 3D model, skull base, neurosurgery

Citation

Isikay I, Cekic E, Baylarov B, Tunc O and Hanalioglu S (2024) Narrative review of patient-specific 3D visualization and reality technologies in skull base neurosurgery: enhancements in surgical training, planning, and navigation. Front. Surg. 11:1427844. doi: 10.3389/fsurg.2024.1427844

Received

07 May 2024

Accepted

02 July 2024

Published

16 July 2024

Edited by

Muhammet Enes Gurses, University of Miami, United States

Reviewed by

Vejay Niranjan Vakharia, Alder Hey Children’s NHS Foundation Trust, United Kingdom

Mirza Pojskic, University Hospital of Giessen and Marburg, Germany

Mohamad Bakhaidar, Medical College of Wisconsin, United States

*Correspondence: Sahin Hanalioglu

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
