Adaptive Physics-Based Non-Rigid Registration for Immersive Image-Guided Neuronavigation Systems

Objective: In image-guided neurosurgery, co-registered preoperative anatomical, functional, and diffusion tensor imaging can be used to facilitate a safe resection of brain tumors in eloquent areas of the brain. However, the brain deforms during surgery, particularly in the presence of tumor resection. Non-Rigid Registration (NRR) of the preoperative image data can be used to create a registered image that captures the deformation in the intraoperative image while maintaining the quality of the preoperative image. Using clinical data, this paper compares the accuracy and performance of several non-rigid registration methods for handling brain deformation. We present a new adaptive method that automatically removes mesh elements in the area of the resected tumor, thereby handling deformation in the presence of resection. To improve the user experience, we also present a new way of using mixed reality with ultrasound, MRI, and CT.

Materials and Methods: This study focuses on 30 glioma surgeries performed at two different hospitals, many of which involved the resection of significant tumor volumes. An Adaptive Physics-Based Non-Rigid Registration method (A-PBNRR) registers preoperative and intraoperative MRI for each patient. The results are compared with three other readily available registration methods: a rigid registration implemented in 3D Slicer v4.4.0; a B-Spline non-rigid registration implemented in 3D Slicer v4.4.0; and PBNRR implemented in ITK v4.7.0, upon which A-PBNRR was based. Three measures were employed to facilitate a comprehensive evaluation of the registration accuracy: (i) visual assessment, (ii) a Hausdorff Distance-based metric, and (iii) a landmark-based approach using anatomical points identified by a neurosurgeon.
Results: A-PBNRR with multi-tissue mesh adaptation improved the accuracy of deformable registration more than fivefold compared to rigid registration and traditional physics-based non-rigid registration, and fourfold compared to the B-Spline interpolation methods that are part of ITK and 3D Slicer. Performance analysis showed that A-PBNRR could be applied, on average, in <2 min, achieving desirable speed for use in a clinical setting.

Conclusions: The A-PBNRR method performed significantly better than other readily available registration methods at modeling deformation in the presence of resection. Both the registration accuracy and performance proved sufficient to be of clinical value in the operating room. A-PBNRR, coupled with the mixed reality system, presents a powerful and affordable solution compared to current neuronavigation systems.


INTRODUCTION
Malignant gliomas are the most common primary and metastatic brain tumors, accounting for ∼70% of the 22,500 new cases of primary brain tumors diagnosed annually in adults in the United States (1,2). Treatment typically includes surgical removal followed by radiotherapy or chemotherapy. Tumor removal provides a tissue diagnosis, relieves mass effect and intracranial pressure that may be causing pain or other neurological symptoms, and improves prognosis. However, gross total resection is difficult to achieve because of the infiltrative nature of gliomas and because brain tumors are often embedded in critical functional brain tissue. Oncologic outcomes clearly depend on the extent of tumor resection, yet functional preservation is critical for quality of life and survival, so tumor surgery is a delicate balance between removing as much tumor as possible and preserving important functional areas of the brain (3)(4)(5).
During the past two decades, developments in image-guided therapy (6) have allowed surgeons to use preoperative imaging and neuronavigation to facilitate a maximally safe resection of gliomas in eloquent areas of the brain. Preoperative anatomical Magnetic Resonance Imaging (MRI) can be combined with functional MRI (fMRI) to map out areas of the brain near the tumor that are involved with important function such as vision, speech and language, or motor control (7)(8)(9)(10)(11)(12). Diffusion Tensor Imaging (DTI) can be used to map out white matter tracts that connect to these important regions and run near or through the tumor (13)(14)(15)(16)(17)(18)(19).
Tracking the position of medical tools in the patient's brain during surgery is possible with neuronavigation, which registers preoperative image data to patient coordinates. The surgeon can then view the location of tools relative to the preoperative anatomical and functional image data, thereby avoiding damage to eloquent areas during tumor resection (20)(21)(22)(23)(24)(25)(26)(27)(28). Commercial neuronavigation systems (e.g., Stealth by Medtronic and VectorVision by BrainLAB) generally use a rigid transformation to map preoperative image data to patient coordinates. Rigid registration is sufficient when mapping between rigid objects (e.g., between the skull in the preoperative image data and the patient's skull in the operating room). During surgery, however, the brain deforms due to several factors such as cerebrospinal fluid leakage, intracranial pressure, gravity, the administration of osmotic diuretics, and the procedure itself (e.g., tumor retraction and resection) (17,29,30). A rigid transformation will not accurately map preoperative image data to the patient's brain during surgery, particularly as the resection proceeds, with the greatest uncertainty at the most critical portions of the surgery (Figure 1).

FIGURE 1 | Discrepancies between preoperative and intraoperative MR imaging before and during neurosurgery. Left: preoperative MRI; Right: intraoperative MRI acquired after a part of the tumor is removed. The yellow outline indicates the preoperative brain outline after a rigid rotation. The large dark cavity is the tumor resection.
The adoption of intraoperative MRI (iMRI) has provided a means for monitoring brain deformation during surgery (31). The number of hospitals offering iMRI has grown during the past decade from a handful of research centers to a couple of hundred clinical sites across the world (32). Currently, access to iMRI is limited by high costs, personnel requirements, and disruption of the operative workflow. Several researchers are investigating methods that combine other imaging modalities, such as 3D ultrasound (33)(34)(35)(36) and surface imaging combined with deformation modeling (37)(38)(39), to compensate for brain deformation. However, iMRI remains the gold standard for measuring intraoperative brain deformation and for monitoring tumor resection.
Given an intraoperative anatomical MRI image registered to patient coordinates, preoperative fMRI and DTI image data can be mapped to the intraoperative image and then to patient coordinates to provide updated guidance for the surgeon. This mapping is usually performed by first registering the fMRI and DTI images to the preoperative anatomical MRI using a rigid transformation, and then registering the preoperative anatomical MRI to the intraoperative MRI image, which is pre-registered to patient coordinates. Because of brain deformation during surgery, registering from preoperative to intraoperative MRI requires a non-rigid registration. DTI and fMRI are deformed the same way as with PBNRR (20).
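The mapping chain described above can be sketched as a composition of homogeneous transforms. The following is an illustrative Python sketch with made-up translations; in particular, the preoperative-to-intraoperative step is non-rigid in practice and appears here only as a placeholder rigid matrix.

```python
# Sketch of the transform chain: fMRI/DTI -> preop MRI -> intraop MRI ->
# patient coordinates, using 4x4 homogeneous matrices. All transforms below
# are hypothetical translations chosen only for illustration.

def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply4(t, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    v = [p[0], p[1], p[2], 1.0]
    return tuple(sum(t[i][k] * v[k] for k in range(4)) for i in range(3))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

fmri_to_preop = translation(1.0, 0.0, 0.0)     # rigid, fMRI/DTI to preop MRI
preop_to_intraop = translation(0.0, 2.0, 0.0)  # placeholder for the NRR step
intraop_to_patient = translation(0.0, 0.0, 3.0)  # rigid, from the tracker

# Compose right-to-left: patient <- intraop <- preop <- fMRI.
fmri_to_patient = matmul4(intraop_to_patient,
                          matmul4(preop_to_intraop, fmri_to_preop))

print(apply4(fmri_to_patient, (0.0, 0.0, 0.0)))  # -> (1.0, 2.0, 3.0)
```

With rigid steps the composition is a single matrix product; with a non-rigid middle step, the composition is applied per voxel via the deformation field instead.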
There are several approaches for estimating the non-rigid registration between two or more images, as outlined in prior research (40)(41)(42)(43)(44). Techniques ranging from control-point registration with spline interpolation, to mass-spring models using displacements of anatomical landmarks as force vectors, to physics-based finite element models have been applied to non-rigid registration between brain data sets, both for brain mapping (45)(46)(47) and for modeling brain shift (20,21,24,25,(37)(38)(39)(48)(49)(50)(51)(52)(53)(54). However, most of these methods were not designed to model tissue retraction or resection. While Physics-Based Non-Rigid Registration (PBNRR) has been shown to accurately capture brain shift (i.e., volumetric deformations of the brain) during image-guided neurosurgery (21), it fails to accurately fuse iMRI with pre-operative MRI for cases with tumor resection. In this paper, we evaluate the accuracy and performance of the Adaptive Physics-Based Non-Rigid Registration (A-PBNRR) method for modeling brain deformation in the presence of tumor resection. The results of a study on 30 glioma cases from two different hospitals are presented, many of which involved a meaningful tumor volume resection, defined as an Extent Of Resection (EOR) ≥70% (55). The majority of the glioma resections also included an EOR ≥78-80%, which is considered a significant predictor of Overall Survival (OS) (56,57). The results of the A-PBNRR are qualitatively and quantitatively compared with the results of three other open-source registration methods that are readily available to researchers and clinicians (58,59). Acceptance in clinical practice requires that non-rigid registration be completed within the time constraints imposed by neurosurgery (e.g., 2-3 min) and without the cost of high-performance computing clusters (20). Thus, the processing speeds are compared on a readily available 12-core desktop system.

Immersion
The use of NRR within immersive environments, together with supporting technologies such as machine learning for reducing the parameter search space, can potentially offer a more affordable alternative to highly expensive commercial neuronavigation systems. In the context of immersive environments, an application can be created to enhance the user experience with ultrasound. Ultrasound is a widely available, easy-to-use, and less expensive imaging modality than MRI. A main benefit is that it does not use any ionizing radiation and is safe. Moreover, ultrasound provides real-time imaging, making it a good tool for guidance in minimally invasive procedures. An inherent deficiency of ultrasound, however, is the separation between the image and the transducer probe. The probe is always close to the patient, but the image is shown on a screen several meters away from the patient, so surgeons must mentally infer the position of the image relative to the patient anatomy. To address this deficiency, a HoloLens can be introduced to display the image on the plane defined by the probe, turning conventional ultrasound into mixed reality ultrasound. This plane can also be used as a cutting plane to show a preoperative image slice such as CT or MRI.
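The cutting-plane idea above can be illustrated with a toy reslicing routine: given a plane defined by an origin and two in-plane axes (in a real system, derived from the tracked probe pose), sample the preoperative volume on that plane. This sketch uses a tiny synthetic volume and nearest-neighbor sampling; a real implementation would use trilinear interpolation.

```python
# Toy illustration of using the probe-defined plane as a cutting plane
# through a preoperative CT/MRI volume (nearest-neighbor sampling).

def reslice(volume, origin, u, v, rows, cols):
    """Sample a rows x cols slice from `volume` on the plane through
    `origin` spanned by direction vectors u and v (voxel coordinates)."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    out = []
    for r in range(rows):
        line = []
        for c in range(cols):
            x = origin[0] + r * u[0] + c * v[0]
            y = origin[1] + r * u[1] + c * v[1]
            z = origin[2] + r * u[2] + c * v[2]
            i, j, k = round(z), round(y), round(x)
            if 0 <= i < nz and 0 <= j < ny and 0 <= k < nx:
                line.append(volume[i][j][k])
            else:
                line.append(0)  # sample falls outside the volume
        out.append(line)
    return out

# 4x4x4 synthetic volume whose voxel value encodes the z index.
vol = [[[z for _x in range(4)] for _y in range(4)] for z in range(4)]
# An axial plane at z=2: every sampled value is 2.
axial = reslice(vol, origin=(0, 0, 2), u=(0, 1, 0), v=(1, 0, 0), rows=4, cols=4)
print(axial[0])  # -> [2, 2, 2, 2]
```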
Users can switch between the preoperative image slice and the ultrasound image to see what happens on this plane before and after surgery.
The HoloLens is an optical see-through (OST) device that uses holographic technology to blend the physical world with a virtual space and allow people to interact with virtual holograms. Because the HoloLens is an OST device, users can see the real world directly through the display. The framework enabling the OST Image-Guided Neurosurgery System (IGNS) includes two parts: a part similar to a traditional IGNS with a data server, and a HoloLens with a data receiver. These parts are shown in Figure 2. The data server collects intraoperative data, including the intraoperative image and NDI tracking data, and sends them to the HoloLens via a Wi-Fi connection. The second part includes a HoloLens data receiver that receives data from the data server and relays it to the HoloLens for rendering. The rendering generates left- and right-eye images of the virtual object, which can be an ultrasound (US) image, DTI fibers, etc. The parallax between these two images is used by the human brain to produce stereo vision. To correctly render these two images, the virtual object must be correctly aligned with the real world, which is usually achieved with a spatial registration technique.
Spatial registration is a technique for finding the transform between different coordinate spaces. There are multiple coordinate spaces in the operating room (OR), as shown in Figure 2. The patient is defined in the coordinate space of the NDI reference tool. During a routine image-guided surgery, a registration procedure is performed to register the patient space with an image space such as the preoperative MRI space. To track the spatial position of a US image, an NDI tracking tool is fixed on the probe and a calibration procedure is performed beforehand. Through the NDI tracker, the preoperative MRI scan, the intraoperative US scan, and the surgical tool can be placed into one reference space. The virtual objects defined in the reference space must then be converted into the HoloLens space to be rendered. An HD color camera on the HoloLens can be used for this conversion. Since the NDI optical tracker tracks objects based on reflected infrared rays and the HoloLens HD camera tracks objects using computer vision, a device is needed that can be tracked both by infrared reflection and by computer vision. A simple solution is to use the spheres of the NDI reference tool, because these spheres can be easily detected using blob detection. Since the spheres can be tracked by both the NDI tracker and the HoloLens camera, the spatial transform between the reference space and the HoloLens space can be found. After the virtual objects are transformed into the HoloLens space, rendering can be performed to obtain stereo vision.
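Recovering the transform from matched sphere positions is a point-based rigid registration problem. As a minimal sketch, the closed-form 2D solution is shown below with hypothetical correspondences (the 3D case is typically solved with the Kabsch/Horn method, which follows the same centroid-then-rotation structure).

```python
import math

# Minimal 2D point-based rigid registration: given matching sphere
# positions in the tracker space (src) and the camera space (dst),
# recover rotation angle and translation in closed form.

def register_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Accumulate dot- and cross-products of centered point pairs.
    sxx = sum((p[0] - csx) * (q[0] - cdx) + (p[1] - csy) * (q[1] - cdy)
              for p, q in zip(src, dst))
    sxy = sum((p[0] - csx) * (q[1] - cdy) - (p[1] - csy) * (q[0] - cdx)
              for p, q in zip(src, dst))
    theta = math.atan2(sxy, sxx)        # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)      # translation applied after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# Hypothetical sphere positions: rotated 90 degrees and shifted by (1, 0).
src = [(0, 0), (1, 0), (0, 2)]
dst = [(1, 0), (1, 1), (-1, 0)]
theta, tx, ty = register_2d(src, dst)
print(round(math.degrees(theta)))  # -> 90
```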

Patient Population and Imaging Protocols
This study includes 30 patients from two hospitals:

Frontiers in Digital Health | www.frontiersin.org

FIGURE 2 | Overview of the OST IGNS framework. The left part is similar to a traditional IGNS with the inclusion of a data server. The right part is the OST portion, which includes a data receiver, a spatial registration method for conversion of objects to HoloLens space, and HoloLens rendering. The data server and data receiver communicate wirelessly.

Brigham and Women's Hospital
Ten patients (six male, four female), with an age range of 28-62 years and a mean age of 45.2 years, underwent surgery for supratentorial gliomas between April 2005 and January 2006 in Brigham and Women's Magnetic Resonance Therapy (MRT) facility, which was typically used to resect intrinsic brain tumors. In some cases, the lesions were in or adjacent to eloquent brain areas, including the precentral gyrus and corticospinal tract for motor function, as well as Broca's and Wernicke's areas for language function. In general, these were patients undergoing surgery for brain tumors in the intraoperative MRI. Inclusion criteria include the presence of an intracranial tumor. Exclusion criteria include contraindication to MRI. A neurosurgeon estimated the volume of resected tumor for each patient by performing a volumetric analysis on the preoperative and intraoperative MRI. Based on this volumetric analysis, the data were categorized as: (i) brain shift (with no resection), (ii) partial resection, (iii) total resection, and (iv) supra-total resection. Table 1 summarizes the clinical data. The data collections were carried out with Institutional Review Board (IRB) approval from both hospitals. The protocol is detailed in the papers by Yao et al. (60) and Archip et al. (20), the first publications in which these data were used.

Segmentation
Non-rigid registration is performed using a patient-specific brain model derived by segmenting the preoperative anatomical image into brain, tumor, and non-brain regions (20). Segmentation speed is not critical because preoperative imaging is typically performed a couple of days before surgery. Segmentation was performed with a combination of manual and automatic tools. First, the skull and outer tissues were removed using the open-source Brain Extraction Tool (BET) (47). Further segmentation of the brain surface was performed using a combination of automatic operators implemented in the 3D Slicer software (i.e., region growing and level-set filters) (61) and slice-by-slice manual segmentation correction. An evaluation of how segmentation accuracy affects registration accuracy is beyond the scope of this paper but will be included in future work.

Mesh Generation
The segmentation is used to generate a patient-specific finite element mesh for the physics-based non-rigid registration methods. The shape of the elements is critical for the accuracy and the convergence of a finite element solution. For example, elements with large dihedral angles tend to increase the discretization error in the solution (62). On the other hand, elements with small dihedral angles are bad for matrix conditioning but not for interpolation or discretization (63,64). A parallel Delaunay meshing method is employed to tessellate the segmented brain with high-quality tetrahedral elements and to model the brain surface with geometric and topological guarantees (65). Both single-tissue (i.e., brain parenchyma) and multi-tissue (i.e., brain parenchyma and tumor) meshes are generated. Figure 3 depicts one of the multi-tissue meshes. Parameter δ (Table 3) determines the size of the mesh, where a smaller δ > 0 generates a larger mesh.
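The dihedral-angle quality criterion above can be sketched directly: compute the six dihedral angles of a tetrahedron from the normals of the two faces sharing each edge. For a regular tetrahedron every dihedral angle is arccos(1/3) ≈ 70.53°; the minimum angle over all elements is the kind of quality figure reported for the mesh in Figure 3.

```python
import math
import itertools

# Dihedral angles of a tetrahedron, computed from the face normals of the
# two faces sharing each edge. Small minimum angles flag poorly shaped
# elements (bad for matrix conditioning).

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(a): return math.sqrt(dot(a, a))

def dihedral_angles(p):
    """Return the six dihedral angles (degrees) of tetrahedron p[0..3]."""
    angles = []
    for i, j in itertools.combinations(range(4), 2):
        k, m = [v for v in range(4) if v not in (i, j)]
        e = sub(p[j], p[i])                 # shared edge direction
        n1 = cross(e, sub(p[k], p[i]))      # normal of face (i, j, k)
        n2 = cross(e, sub(p[m], p[i]))      # normal of face (i, j, m)
        cos_a = dot(n1, n2) / (norm(n1) * norm(n2))
        angles.append(math.degrees(math.acos(cos_a)))
    return angles

# Regular tetrahedron inscribed in the unit cube.
tet = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
print(round(min(dihedral_angles(tet)), 2))  # -> 70.53
```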

Rigid Registration
For the purpose of this study, patients first underwent an intraoperative scan after their head was positioned and fixed for the craniotomy but before the skull was opened. As a standard procedure iMRI is performed at the neurosurgeon's request, after dural opening, during or after a significant tumor volume resection or when decided appropriate by the surgeon (66,67). Assuming minimal brain shift at this point, an initial rigid registration was performed to estimate a rigid transformation from the preoperative to intraoperative image data. This rigid transformation was used to initialize non-rigid registration methods.
The rigid registration was performed using the BRAINSFit module integrated in 3D Slicer v4.4.0 (58). BRAINSFit is a general registration module widely used by the research community. BRAINSFit's rigid registration relies on histogram bins and spatial samples to estimate a Mattes Mutual Information cost metric (Table 2). The larger the number of spatial samples, the slower and more precise the fit. The default values for the number of histogram levels and the sampling percentage are 50 and 0.2%, respectively. One hundred histogram levels and a 5% sampling percentage were selected to achieve higher accuracy (Table 2).
The default values were used for the rest of the parameters (optimizer type, max number of iterations, min step length, and grid size) to ensure the stability of the registration. More information about the parameters of BRAINSFit is available in 3D Slicer.
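The mutual information cost that the registration maximizes can be illustrated with a simplified histogram-based version: bin paired intensity samples into a joint histogram and sum p(a,b)·log(p(a,b)/(p(a)p(b))). This is only a sketch with synthetic intensities; the actual Mattes metric uses B-spline Parzen windowing and random spatial sampling rather than the hard binning shown here.

```python
import math
from collections import Counter

# Toy mutual information between two intensity sample lists, using hard
# histogram bins (a simplified stand-in for the Mattes MI cost metric).

def mutual_information(fixed, moving, bins=4, max_val=256):
    width = max_val / bins
    pairs = Counter((int(f // width), int(m // width))
                    for f, m in zip(fixed, moving))
    n = len(fixed)
    pf, pm = Counter(), Counter()
    for (a, b), c in pairs.items():
        pf[a] += c
        pm[b] += c
    mi = 0.0
    for (a, b), c in pairs.items():
        pab = c / n
        mi += pab * math.log(pab / ((pf[a] / n) * (pm[b] / n)))
    return mi

fixed = [10, 10, 80, 80, 150, 150, 220, 220]
aligned = [12, 9, 78, 83, 149, 152, 221, 218]     # preserved correspondence
scrambled = [220, 10, 150, 80, 10, 220, 80, 150]  # broken correspondence
# A good alignment yields higher mutual information than a poor one.
print(mutual_information(fixed, aligned) > mutual_information(fixed, scrambled))
# -> True
```

A finer binning (e.g., the 100 histogram levels chosen above) gives a sharper estimate at the cost of needing more samples per bin, which is why the sampling percentage was raised alongside it.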

Non-rigid Registration
As surgery proceeds, the initial rigid preoperative to intraoperative registration becomes increasingly less valid.
To make the best use of preoperative image data for surgical guidance (including the use of fMRI and DTI to inform the surgeon of critical structures near the tumor), the preoperative image must be updated accordingly using non-rigid registration. Recent efforts have aimed to model intraoperative brain deformation due to tissue retraction and tumor resection. Miga et al. (68) introduced a method for modeling retraction and resection using a multi-step procedure, which allows arbitrary placement and movement of a retractor and removal of tissue. Tissue resection is modeled by manually deleting model elements identified as tumor in the preoperative image. Risholm et al. (69) proposed a registration framework based on the bijective Demons algorithm which can handle retraction and resection. Retraction is detected at areas of the deformation field with high internal strain, and the estimated retraction boundary is integrated as a diffusion boundary in an anisotropic smoother. Resection is detected by a level set method evolving in the space where image intensities disagree. Ferrant et al. (54) tracked brain deformation due to tumor resection over multiple intraoperative MRI acquisitions. After each scan, the brain surface is segmented, and a surface-matching algorithm is used to drive the deformation of a finite element model of the brain. Vigneron et al. (70) modeled retraction by segmenting the brain surface from two sequential intraoperative MRI image volumes and identifying landmarks on these surfaces. Displacements of the landmarks between the two surfaces are used to drive deformation using a finite element modeling technique that allows discontinuities at the resection boundary.

FIGURE 3 | A multi-tissue (brain parenchyma, tumor) finite element mesh used for non-rigid registration (number of tetrahedral elements: 160,179; minimum dihedral angle: 4.41°). Top row: the mesh superimposed on a volume rendering of the MRI data. Cyan and red represent the brain parenchyma and tumor meshes, respectively. Bottom row: mesh fidelity illustrated on axial, sagittal, and coronal slices. Each slice depicts a 2D cross-section of the mesh surface (cyan and red lines) and the segmented volume (green and yellow regions). The closer the mesh surface is to the segmented boundaries, the higher the mesh fidelity.

We recently introduced an Adaptive Physics-Based Non-Rigid Registration (A-PBNRR) method (24,59). This A-PBNRR was specifically developed to model tumor resection in intraoperative images without manual intervention. Unlike other methods (54,70), it uses image-based registration and thereby does not require segmentation of the intraoperative MRI image, which is time-consuming and may require manual intervention. The algorithm is parallelized to ensure that it is fast enough to be used in a clinical setting. As a standard for comparison, both the rigid and the non-rigid registration methods were implemented in the open-source BRAINSFit module of 3D Slicer. The non-rigid registration method is based on a B-spline interpolation scheme, which uses a 3-dimensional cubic control grid to optimize the registration (58). Table 2 lists the parameters for the B-Spline deformable registration method. To facilitate performance comparisons, all three non-rigid registration methods are parallelized for shared-memory multiprocessor architectures.
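The B-spline scheme interpolates a dense deformation from a coarse control grid using cubic basis functions; each image point is influenced by its 4×4×4 surrounding control points in 3D. A minimal 1D sketch of the four cubic basis functions and their partition-of-unity property follows (the grid values here are made-up displacements, not registration output).

```python
# Cubic B-spline basis weights for the 4 neighboring control points at
# local coordinate u in [0, 1), and a 1D deformation lookup built on them.

def cubic_bspline_weights(u):
    return (
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    )

def displacement_1d(control, i, u):
    """Deformation at grid interval i, offset u: blend controls i-1..i+2."""
    w = cubic_bspline_weights(u)
    return sum(wk * control[i - 1 + k] for k, wk in enumerate(w))

control = [0.0, 0.0, 2.0, 0.0, 0.0]  # one displaced control point (mm)
print(round(sum(cubic_bspline_weights(0.3)), 6))   # -> 1.0 (partition of unity)
print(round(displacement_1d(control, 2, 0.0), 3))  # -> 1.333 (smooth, local)
```

The partition-of-unity property guarantees that a uniform control-point displacement translates the whole image uniformly, and the local support keeps each optimization variable's influence confined to its neighborhood.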

Adaptive Physics-Based Non-rigid Registration
A-PBNRR (22) augments PBNRR (59) to accommodate soft-tissue deformation caused by tumor resection. PBNRR has been shown to accurately capture brain shift (i.e., volumetric deformations of the brain) during image-guided neurosurgery (21). PBNRR uses the finite element method (FEM) to model deformations and estimates a sparse displacement vector associated with selected features located in the cranial cavity of the preoperative image. The sparse vector is used (as boundary conditions) to drive the deformation of a patient-specific, single-tissue (i.e., brain parenchyma) finite element mesh. A-PBNRR adds an iterative method which adaptively modifies a heterogeneous finite element model to optimize non-rigid registration in the presence of tissue resection. Using the segmented tumor and the registration error at each iteration, A-PBNRR gradually excludes the resection volume from the model. During each iteration, registration is performed, the registration error is estimated, the mesh is deformed to a predicted resection volume, and the brain model (minus the predicted resection volume) is re-tessellated. Re-tessellation is required to ensure high-quality mesh elements.
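The adaptive loop above can be sketched schematically. The registration, error-estimation, and re-tessellation steps are stand-in stubs here (the real method operates on tetrahedral meshes and image intensities); the sketch only shows the control flow: keep excluding tumor elements flagged by the error field until no further resection is predicted.

```python
# Schematic control flow of the A-PBNRR adaptive loop, with toy stand-ins
# for registration, per-element error estimation, and re-tessellation.

def a_pbnrr_sketch(elements, error_of, threshold=1.0, max_iters=10):
    """elements: list of (element_id, tissue_label). Iteratively exclude
    tumor elements whose registration error marks them as resected."""
    model = list(elements)
    for _ in range(max_iters):
        # 1) register (stub) and 2) estimate per-element registration error;
        # tumor elements with a large error are predicted resected.
        resected = [e for e in model
                    if e[1] == "tumor" and error_of(e) > threshold]
        if not resected:
            break  # converged: no further resection predicted
        # 3) deform mesh toward the predicted resection volume (stub) and
        # 4) re-tessellate the model minus the predicted resection volume.
        model = [e for e in model if e not in resected]
    return model

elements = [(0, "parenchyma"), (1, "parenchyma"), (2, "tumor"), (3, "tumor")]
# Hypothetical error field: element 3 sits inside the resection cavity.
errors = {0: 0.2, 1: 0.3, 2: 0.4, 3: 5.0}
final = a_pbnrr_sketch(elements, lambda e: errors[e[0]])
print([e[0] for e in final])  # -> [0, 1, 2] (element 3 excluded)
```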
The major improvements of A-PBNRR over PBNRR are:
• Adaptivity. An adaptive, iterative process allows A-PBNRR to gradually change the preoperative model geometry to accommodate resection.
• Heterogeneity. Whereas PBNRR can only be applied to a homogeneous (single-tissue) brain model, A-PBNRR can accommodate a heterogeneous (multi-tissue) model. Two-tissue models (brain parenchyma and tumor) were used in this study, but the method can accommodate any number of tissues. Figure 3 depicts a heterogeneous brain model, and Table 3 lists the mechanical tissue properties used in this study.
• Higher Parallelization. A-PBNRR uses a parallel framework that can target shared-memory multi-core machines. A previous study (22) showed that A-PBNRR exploits additional parallelism over PBNRR with corresponding performance improvements so that, even with multiple iterations, A-PBNRR requires on average <2 min to perform non-rigid registration.

OPTIMIZATIONS
A-PBNRR is a computationally intensive algorithm that must be able to execute during a time-critical IGNS operation. Two ways were explored to improve accuracy and performance: (1) equidistribution of registration points using adaptive refinement for improved accuracy and (2) deep learning for parameter search space reduction for improved accuracy and performance.

Adaptive Refinement for the Optimal Distribution of Registration Points
As noted in section Mesh Generation, the presented pipeline utilizes a Delaunay-based image-to-mesh conversion tool for mesh generation. This approach can generate a mesh that faithfully captures (with geometric guarantees) the surface of the input image and the interface between the two tissues. However, it does not consider any information about the registration points recovered by the Block Matching step. Examples of selected blocks are shown in Figure 4. In previous work (73), the distribution of landmarks over the mesh was incorporated into the mesh generation module using custom sizing functions for two different mesh generation methods (Delaunay refinement and Advancing Front). The goal of these modifications is to equidistribute the landmarks among the mesh elements, which is expected to reduce the registration error. The evaluation presented in Fedorov and Chrisochoides (73) was based on synthetic deformation fields and showed that these modifications indeed reduce the registration error. In this work, the same sizing function is applied in order to validate the effectiveness of the method. Moreover, preliminary results on applying mesh adaptation methods that originate from the Computational Fluid Dynamics field (74) are presented.
FIGURE 4 | Selected blocks from an MRI volume using various connectivity patterns. Blocks are depicted on 10 consecutive sagittal slices. From top to bottom row: sagittal slice (left) and volumetric MRI rendering (right); selected blocks with a "vertex" pattern; selected blocks with an "edge" pattern; selected blocks with a "face" pattern. Number of selected blocks for all patterns: 322,060.

For completeness, a summary of the method employed in Fedorov and Chrisochoides (73) is presented, along with the modifications that turn it into an anisotropic metric-based method. The equidistribution of the registration points can be formulated as assigning the same number of registration points to each mesh vertex cell complex, where a mesh vertex cell complex is defined as the set of all the elements attached to a vertex. The crux of the method is to set the local spacing at each vertex equal to the distance to the k-th closest registration point. Assuming an ideal spacing, the mesh vertex cell complex of each vertex will contain k registration points. An illustration for k = 5 is given in Figure 5. Notice that another way to interpret the sizing constraint at each vertex is as a sphere centered at each mesh vertex with a radius equal to the distance to the k-th registration point.
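The sizing function described above reduces to a k-nearest-neighbor query per vertex. A minimal sketch with made-up vertex and registration-point coordinates:

```python
import math

# Sizing field sketch: the target local spacing at each mesh vertex is the
# distance to its k-th closest registration point, so that an ideally sized
# vertex cell complex contains k points.

def kth_distance(vertex, points, k):
    return sorted(math.dist(vertex, p) for p in points)[k - 1]

def sizing_field(vertices, registration_points, k=5):
    return {v: kth_distance(v, registration_points, k) for v in vertices}

# Registration points clustered near the origin: a vertex close to the
# cluster gets a small target spacing (finer mesh), a far vertex a large one.
points = [(0.1 * i, 0.0, 0.0) for i in range(10)]
field = sizing_field([(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)], points, k=5)
print(field[(0.0, 0.0, 0.0)] < field[(5.0, 0.0, 0.0)])  # -> True
```

The anisotropic variant replaces the sorted-distance (sphere) criterion with the minimum-volume ellipsoid enclosing the same k points, as discussed next.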
The non-optimized A-PBNRR creates adaptive meshes, but it does not capture the local density of the landmarks efficiently because only the k-th point is used and the relative locations of the remaining k-1 landmarks are ignored. Building upon the observation that the previous method can be seen as placing spheres at each vertex, one can instead evaluate the smallest bounding ellipsoid that contains the k closest registration points and is centered at the given vertex. Describing the local spacing as an ellipsoid captures the local distribution of the landmarks better, thanks to the increased degrees of freedom of an ellipsoid in comparison to a sphere. Creating the minimum-volume ellipsoid that encloses a given point set is a problem well-studied in the optimization literature (75). The constructed ellipsoid has a natural mapping to a 3 × 3 positive definite matrix (76) that can be used as a metric to guide the anisotropic mesh adaptation procedure. To give the mesh adaptation procedure more flexibility, an additional "inflation" constant a is introduced; it is common to all the points and allows all ellipsoids to be enlarged by a constant factor. The goal of this parameter is to allow the mesh generation procedure to perform operations that may not conform to the strict size but improve the overall result. See Figure 5.
In order to incorporate this approach to A-PBNRR, the mesh generated by the Parallel Optimistic Delaunay Mesher (PODM) method (65) at each iteration, along with the landmarks identified by the Block-Matching step, are used to build a metric field. The metric field is constructed by iterating in parallel the mesh vertices and evaluating the k-closest registration points using a k-nn search from the VTK library (77). The minimum volume bounding ellipsoid is constructed using the Khachiyan algorithm (78). Finally, the mesh is adapted using MMG3D (79).

Deep Learning for Parameter Search Space Reduction
A-PBNRR utilizes many different parameters that drive its results. Every patient's brain is different, and time is a critical resource in IGNS operations. As a result, the issue of determining input parameters to achieve a registration as optimal and as quickly as possible while also accounting for patient-specific details is an open problem. A-PBNRR has many input parameters (see Table 3), and the cost for an exhaustive parameter search is prohibitively expensive. For example, for an average case presented in this paper, it takes more than 10 days using a cluster of 400 cores running 24/7 to find sub-optimal parameter values. To address this problem, we have developed a deep feedforward neural network that can predict sets of optimal or suboptimal input parameters that yield a low Hausdorff distance of the registered image from the preoperative image. The deep learning system learns the correlation of the different input parameters, some of which are physical parameters, and how they contribute to a low Hausdorff distance.
The neural network takes as input 14 parameters: 12 A-PBNRR input parameters and two additional patient-specific parameters. The output of the neural network is a single value: the predicted Hausdorff distance of the registered image from the preoperative image if these parameters were to be used as input to A-PBNRR. The two patient-specific parameters are: (1) the location of the tumor in the brain (lobe-wise), each position represented by a numerical value and (2) the degree of brain deformation caused by the tumor, which can be directly inferred from the rigid registration error. These two parameters are necessary to increase the patient-specificity of the model, as a general model does not properly consider differences in the brains of different patients. In our experiments, using the patient-specific parameters yielded significantly better results than simply using the A-PBNRR parameters. As for the architecture, the neural network was implemented using Keras, on a TensorFlow backend. It consists of four hidden fully connected layers, each composed of 128 neurons. We used ReLU as the activation function and stochastic gradient descent (SGD) with Nesterov momentum for optimization. The architecture was determined via grid search. The deep learning model was trained on output data from over 2.5 million executions of A-PBNRR, spanning 12 different patient cases, including partial, complete, and extreme tumor resections. Out of the 12 cases, 10 were used for training (∼2.3 million parameter sets), and two for evaluation (∼200,000 parameter sets). The cases used for evaluation are partial tumor resection and complete tumor resection data. We have four classes of data: brain shift, partial, complete, and supra-total tumor resections. The two classes in the middle were used for evaluation since they represent the most frequently occurring cases. The training and evaluation datasets are mutually exclusive.
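The forward pass of the architecture described above (14 inputs, four fully connected ReLU layers of 128 units, one linear output) can be sketched in pure Python. The weights here are random placeholders, not the trained model, so only the shape of the computation is meaningful.

```python
import random

# Pure-Python sketch of a forward pass through the predictor architecture:
# 14 inputs -> four hidden ReLU layers of 128 units -> 1 linear output
# (the predicted Hausdorff distance). Weights are random stand-ins.

random.seed(0)

def dense(n_in, n_out):
    """Layer as a list of rows: n_in weights followed by one bias."""
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in)] + [0.0]
            for _ in range(n_out)]

def forward(layers, x):
    for i, layer in enumerate(layers):
        x = [sum(w * xi for w, xi in zip(row[:-1], x)) + row[-1]
             for row in layer]
        if i < len(layers) - 1:            # ReLU on hidden layers only
            x = [max(0.0, v) for v in x]
    return x[0]                            # scalar prediction

layers = ([dense(14, 128)]
          + [dense(128, 128) for _ in range(3)]
          + [dense(128, 1)])
params = [0.5] * 12 + [3.0, 1.2]  # 12 A-PBNRR + 2 patient-specific inputs
prediction = forward(layers, params)      # a single float
```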
The deep learning model is used before the execution of A-PBNRR: the software takes as input the parameter sets that the model predicts will yield the lowest Hausdorff distances. The neural network is fed each parameter set from a pool of patient-specific parameter sets, produced by augmenting a base, general parameter-set pool with the two patient-specific parameters. For each set, the network outputs the Hausdorff distance of the registered image that A-PBNRR would produce if that set were used. The parameter sets with the lowest predicted distances are compiled in a file and can be used as input to A-PBNRR.
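The selection step can be sketched as follows; the helper names and the toy surrogate are illustrative assumptions, while the real pipeline iterates the trained network over the augmented parameter-set pool and writes the best candidates to a file:

```python
import heapq

def rank_parameter_sets(pool, predict, top_k=10):
    """Score every candidate parameter set with the surrogate model and
    return the top_k (predicted_HD, params) pairs, lowest prediction first."""
    scored = ((predict(p), p) for p in pool)
    return heapq.nsmallest(top_k, scored, key=lambda sp: sp[0])

# Toy stand-in surrogate: pretend the parameter sum predicts the distance.
pool = [(i, i % 3) for i in range(100)]
best = rank_parameter_sets(pool, predict=sum, top_k=3)
```

Using a bounded heap keeps the memory cost independent of the pool size, which matters when hundreds of thousands of parameter sets are scored per patient.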
Choosing good parameters for medical image registration is a difficult task, as there are many possible values and combinations. The deep learning component of the A-PBNRR framework makes this easier by greatly narrowing the set of candidate optimal parameters for each individual patient, bringing A-PBNRR one step closer to being utilized in real-world, time-critical IGNS operations.

RESULTS
An evaluation of four registration methods (including the proposed method) is performed using imaging data from 30 patients who underwent partial, total, and supra-total glioma resection. For a more comprehensive evaluation, the accuracy was assessed using both qualitative (visual inspection) and quantitative criteria (a Hausdorff Distance-based error metric and a landmark-based error measured by Dr. Chengjun Yao). The four registration methods were: (i) rigid registration implemented in 3D Slicer v4.4.0; (ii) B-Spline non-rigid registration implemented in 3D Slicer v4.4.0; (iii) PBNRR implemented in ITK v4.7.0; and (iv) the proposed A-PBNRR.

Visual Assessment
In most applications, careful visual inspection remains the primary validation check due to its simplicity and speed. In this study, a neurosurgeon visually inspected the full registered volumes, examining the brain morphology, relevant landmarks and eloquent areas of the brain, the brain shift, the margins of the tumor, and the deformation after the resection. The inspection was performed after subtracting the registered preoperative MRI from the intraoperative MRI; the smaller the differences after the subtraction, the more precise the alignment. Figure 6 presents the registration results for 13 tumor resection cases (three partial, seven total, and three supra-total resections). These cases are representative due to the different locations of the tumor resection. For each patient, Figure 6 shows a 2D section from the intraoperative MRI, the corresponding registered preoperative MRI, and the subtraction of the registered preoperative MRI from the intraoperative MRI. As Figure 6 illustrates, A-PBNRR provides the most accurate alignment and preserves brain morphology in the presence of resection, specifically near tumor margins. In contrast, the other registration methods fail to capture the complex soft-tissue deformation near the tumor resection.

Quantitative Assessment Using Hausdorff Distance (HD)
An objective and automatic method (80) was employed to quantitatively evaluate the registration accuracy. This method was preferred because it is fast and does not require manual intervention. It relies on Canny edge detection (81) to compute two point sets. The first point set is computed from the preoperative volume (Figure 7A) and then transformed (using the deformation field computed by each registration method) from the preoperative to the intraoperative space; Figure 7B depicts a transformed point set. The second point set is computed from the intraoperative volume (Figure 7C). A Hausdorff Distance (HD) metric (82) is employed to calculate the degree of displacement between the two point sets. Table 4 presents the results of this quantitative evaluation. A smaller HD value indicates better registration (HD ≥ 0), so a perfect registration would have an HD of 0.
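The symmetric Hausdorff distance between the two edge point sets can be sketched in a few lines of NumPy; this is a brute-force illustration, whereas production code operating on large Canny point clouds would use spatial indexing (e.g., k-d trees):

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between point sets A (N x 3) and B (M x 3).
    HD >= 0, and HD == 0 only when the sets align perfectly."""
    # Pairwise Euclidean distances, brute force: O(N * M) memory and time.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    d_ab = d.min(axis=1).max()   # directed HD from A to B
    d_ba = d.min(axis=0).max()   # directed HD from B to A
    return max(d_ab, d_ba)

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 2.0]])
hd = hausdorff_distance(A, B)
```

Note that the directed distances differ in general, which is why both directions are computed before taking the maximum.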
The ratio HD_X / HD_A-PBNRR indicates the degree to which the error of A-PBNRR is lower than the error of method X, where X ∈ {RR, B-Spline, PBNRR}. A-PBNRR achieved the smallest error in each individual case and the smallest average error (3.63 mm) among all four methods. Across the 30 test cases, A-PBNRR is 5.47, 4.34, and 5.06 times more accurate in the presence of resection than RR, B-Spline, and PBNRR, respectively. Note that this study utilized a 100% HD metric, unlike our previous work (20), which featured a 95% HD metric. Figure 8A plots the HD errors from Table 4. Tables 4 and 5 suggest that, independently of the evaluation method, A-PBNRR outperforms the three other registration methods used in this evaluation. These quantitative results are consistent with the qualitative data presented in Figure 6.

Quantitative Assessment via Anatomical Landmarks
Registration accuracy was quantitatively evaluated using anatomical landmarks selected by a neurosurgeon, as suggested in Hastreiter et al. (83) (Figure 9). The neurosurgeon located six landmarks in each registered preoperative image volume and the corresponding intraoperative image volume. Landmarks A and B were selected individually in the cortex near the tumor; C and D were selected at the anterior horn and the triangular part of the lateral ventricle, respectively; E and F were selected at the junction between the pons and midbrain and at the roof of the fourth ventricle, respectively. Between one and four additional landmarks of functional interest were located on an individual basis by the neurosurgeon. For each case, these additional landmarks were selected depending on the location of the tumor, the surgical approach, and the visibility in the preoperative and intraoperative images. These structures of functional interest include, amongst others, the primary motor cortex, the pyramidal tract, the Sylvian fissure, the lateral border of the thalamus, the basal ganglia, the posterior limb of the internal capsule, and major vessels. For each landmark, the error was calculated as the distance between the landmark's location in the registered preoperative image and its location in the corresponding intraoperative image. Table 5 presents the minimum, maximum, and mean errors for each case. The assessment confirms that A-PBNRR provides the most accurate registration, with an average minimum error of 1.03 mm and an average mean error of 3.22 mm.
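The per-case landmark statistics reduce to Euclidean distances between corresponding points; the coordinates in this sketch are illustrative, not taken from the study:

```python
import numpy as np

def landmark_errors(pre_pts, intra_pts):
    """Euclidean distance (mm) per corresponding landmark pair, plus the
    min/max/mean summary reported per case in Table 5's format."""
    d = np.linalg.norm(np.asarray(pre_pts) - np.asarray(intra_pts), axis=1)
    return d, d.min(), d.max(), d.mean()

# Hypothetical landmark coordinates (mm) in registered preoperative and
# intraoperative space, one row per landmark.
pre   = [[10.0, 20.0, 30.0], [5.0, 5.0, 5.0], [0.0, 0.0, 0.0]]
intra = [[10.0, 20.0, 33.0], [5.0, 9.0, 5.0], [0.0, 0.0, 0.0]]
d, d_min, d_max, d_mean = landmark_errors(pre, intra)
```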

Performance
All methods were parallelized for shared memory multiprocessor architectures. Figure 8B presents the end-to-end execution time for the registration of preoperative to intraoperative images for all 30 cases. Rigid registration, B-Spline, PBNRR, and A-PBNRR required on average 0.84, 8.98, 0.83, and 1.42 min, respectively (including I/O). Note that the B-Spline method is the most computationally intensive, requiring more than 8 min in 17 out of 30 cases. A different set of B-Spline parameters, such as a smaller sampling percentage, a smaller number of histogram bins, or a coarser grid could potentially improve performance at the cost of accuracy. Although A-PBNRR is slower than rigid registration and PBNRR, it has significantly better accuracy in the presence of resection than the other methods, and it is fast enough to satisfy the constraints of image-guided neurosurgery, where registration times of <2-3 min are desired.

Adaptive Refinement for the Optimal Distribution of Registration Points
The results of augmenting the A-PBNRR method with mesh adaptation are presented in Table 6. The number of registration points per mesh cell vertex was set to k = 500; this value was selected because it produces meshes with a vertex count close to that of the baseline meshes. The first line of the table corresponds to the base case of using A-PBNRR with no adaptation. "iso" indicates the application of isotropic adaptation as described in (73). Isotropic adaptation reduces the Hausdorff distance by almost 13% for this case but increases the error at the landmarks identified by the neurosurgeon; moreover, it comes at the price of almost doubling the mesh size. The remaining rows correspond to applying anisotropic mesh adaptation. We also provide a free parameter "a" that 'inflates' the generated ellipsoids by a constant factor; its goal is to allow the mesh generation procedure to perform operations that may not conform to the strict sizing but improve the overall result. For a = 1.0 (i.e., no inflation), the Hausdorff distance is marginally lower and only the minimum landmark-based error improves. Increasing the inflation parameter to 1.2 and 1.5 improves the minimum and mean landmark-based errors while also reducing the mesh size. Although these results are preliminary (a complete study will be conducted in the future), they indicate that generating an 'ideal' mesh for image registration involves competing evaluation criteria: minimum mesh size, Hausdorff distance, and the landmark-based error described above. Introducing mesh adaptation to A-PBNRR has the potential to improve its effectiveness, but further investigation is needed to optimize its parameters, which makes it a good candidate for the deep learning methods presented in section Deep Learning for Parameter Search Space Reduction.
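One way to read the inflation parameter is as a uniform scaling of each sizing ellipsoid's semi-axes; this minimal sketch assumes that representation, since the mesher's actual sizing-field data structure is not detailed here:

```python
def inflate_ellipsoid(semi_axes, a=1.2):
    """Scale the semi-axes (sx, sy, sz) of a sizing ellipsoid by factor a.
    a = 1.0 leaves the target element size unchanged; a > 1.0 relaxes
    the sizing constraint, giving the mesher more room to optimize."""
    return tuple(a * s for s in semi_axes)

# Relaxing a 2 x 1 x 0.5 mm sizing ellipsoid by 50%.
inflated = inflate_ellipsoid((2.0, 1.0, 0.5), a=1.5)
```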

Deep Learning for Parameter Search Space Reduction
Using the deep learning model, we achieved a training root mean squared error (RMSE) of 1.41 and an evaluation RMSE of 1.21 for the predicted Hausdorff distances. On average, based on Table 7, A-PBNRR with deep learning is ∼8.45 times better than rigid registration, ∼6.71 times better than B-Spline registration, and ∼7.9 times better than PBNRR. Notably, A-PBNRR performs very well on deep tumors, which cause large brain deformation; compared to the other registration methods, its results there are on average ∼16.8 times better. Overall, A-PBNRR with deep learning leads to more accurate results than any of the other registration methods.

DISCUSSION AND FUTURE WORK
Recent advances in neuroimaging such as fMRI and DTI allow neurosurgeons to plan tumor resections that minimize damage to eloquent cortical regions and white matter tracts (11, 13, 84–86). A number of commercial systems (e.g., the Brainlab Curve Image-Guided Therapy system) can register fMRI and DTI to preoperative anatomical MRI images and then map these data to the patient intraoperatively using rigid registration. It has been shown, however, that there can be significant deformation of the brain during surgery, especially in the presence of tumor resection, making rigid registration insufficient and surgical plans made with preoperative data invalid (87). Intraoperative MRI can be used to observe the deformed brain during surgery. While it is impractical to acquire fMRI and DTI intraoperatively, the preoperative MRI image can be registered to an intraoperative MRI image using non-rigid registration. The resultant registration can then be applied to the preoperative fMRI, DTI, and the surgical plan, providing more accurate, updated guidance to the neurosurgeon (20).
Non-rigid registration algorithms remain computationally expensive and have proven to be impractical for use in clinical settings in the past. However, this study indicates that parallel/distributed computing and deep learning can provide faster and more effective registration for image-guided neurosurgery (20,88), which is critical for immersive solutions.
The experimental results confirm that, of the four methods, A-PBNRR provides the most accurate registration, with average errors of 3.2 and 3.6 mm for the landmark-based and Hausdorff Distance-based metrics, respectively, for anisotropic image spacing (e.g., 0.488 × 0.488 × 1.0 mm³, Table 1). The extent of the resection (partial, total, or supra-total) does not significantly affect this accuracy (Figures 7, 8; Table 4). Performance analysis shows that A-PBNRR is sufficiently fast to be useful in a clinical setting.
As part of future work, the modeling of major brain substructures, such as the ventricles, will be improved. At present, the brain is meshed as a single tissue, which can sometimes produce incorrectly structured tissue; for example, the ventricles may appear twisted. This will be solved by using multi-tissue mesh generation, in which the ventricles and the rest of the brain are meshed as independent structures and later combined. Regarding the machine learning, the Hausdorff distance results were better than those obtained with the default parameters. However, the assessment of anatomical landmarks by the neurosurgeons showed little improvement of the machine learning over the default parameters, so further work is needed.
This paper has shown, on a study of 30 patients, that we can map preoperative image data to the patient with an average error of a few millimeters and with computation times that are acceptable in a clinical environment. Future efforts will continue this focus on improving registration accuracy and decreasing computation times, using Cloud computing and Machine Learning. Decreasing computation times will require exploring ways to improve parallelization of registration methods. Improving accuracy will require an investigation into higher order finite element modeling to study the impact on accuracy and performance. Additionally, the effect of segmentation and model construction on the registration accuracy remains a potential area of future study. Specifically, an investigation into the effect of higher quality segmentation of the brain surface and structures that constrain brain deformation, such as the skull cavity, the falx cerebri, and the tentorium cerebelli, would reveal the impact of incorporating more tissues, including blood vessels and the ventricles, into the brain model. Finally, because intraoperative MRI is not available in many neurosurgical suites, further investigation incorporating the use of intraoperative ultrasound to track brain deformation during surgery would extend these registration methods to map preoperative image data to intraoperative ultrasound.
In summary, although the paper presents promising results, there are two limitations. First, an accuracy of <2 mm needs to be consistently achieved. Second, these results were realized with intra-operative MRIs, which are expensive and thus not widely used. Intra-operative ultrasound can be used as an alternative; however, achieving high accuracy with ultrasound is a much harder problem, so the more computationally tractable modality (i.e., intra-operative MRI) must be addressed first.

CONCLUSION
This study compared four methods for registering preoperative image data to intraoperative MRI images in the presence of significant brain deformation during glioma resection in 30 patients. The Adaptive Physics-Based Non-Rigid Registration method developed in this study proved to be significantly better than other methods at modeling deformation in the presence of resection. Both the registration accuracy and performance were found to be of clinical value in the operating room, and the combination of A-PBNRR with deep learning and mixed reality can offer a compelling solution for IGNS.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS
FD and NC developed the Adaptive PBNRR (A-PBNRR) method, which was implemented by FD. AA, CT, and AF (while he was at CRTC), together with NC, developed the machine learning and mesh refinement approaches to improve the accuracy of the approximation method for deformable registration. The immersive aspects of the paper and the use of NRR within mixed reality IGNS environments were developed by YL and NC. The collection of data and the evaluation of the results and methods were carried out by CY, KK, SF, NF, AG, and RK. All authors contributed to the article and approved the submitted version.