ORIGINAL RESEARCH article

Front. Oncol., 02 February 2021
Sec. Neuro-Oncology and Neurosurgical Oncology
This article is part of the Research Topic Intraoperative Ultrasound in Brain Tumor Surgery: State-Of-The-Art and Future Perspectives

Comparison of Intraoperative Ultrasound B-Mode and Strain Elastography for the Differentiation of Glioblastomas From Solitary Brain Metastases. An Automated Deep Learning Approach for Image Analysis

Santiago Cepeda1*, Sergio García-García1, Ignacio Arrese1, Gabriel Fernández-Pérez2, María Velasco-Casares2, Manuel Fajardo-Puentes2, Tomás Zamora3 and Rosario Sarabia1
  • 1Neurosurgery Department, University Hospital Río Hortega, Valladolid, Spain
  • 2Radiology Department, University Hospital Río Hortega, Valladolid, Spain
  • 3Pathology Department, University Hospital Río Hortega, Valladolid, Spain

Background: The differential diagnosis of glioblastomas (GBM) from solitary brain metastases (SBM) is essential because the surgical strategy varies according to the histopathological diagnosis. Intraoperative ultrasound elastography (IOUS-E) is a relatively novel technique implemented in the surgical management of brain tumors that provides additional information about the elasticity of tissues. This study compares the discriminative capacity of intraoperative ultrasound B-mode and strain elastography to differentiate GBM from SBM.

Methods: We performed a retrospective analysis of patients who underwent craniotomy between March 2018 and June 2020 with a diagnosis of glioblastoma (GBM) or solitary brain metastasis (SBM). Cases with an intraoperative ultrasound study were included. Images were acquired before dural opening, first in B-mode and then with the strain elastography module. After image pre-processing, a deep learning-based analysis was conducted using the open-source software Orange. Via transfer learning with Inception V3, we trained an existing neural network to classify tumors into GBM and SBM. Logistic regression (LR) with LASSO (least absolute shrinkage and selection operator) regularization, support vector machine (SVM), random forest (RF), neural network (NN), and k-nearest neighbor (kNN) were then used as classification algorithms. Model validation was performed with ten-fold stratified cross-validation. The models were evaluated using the area under the curve (AUC), classification accuracy, and precision.

Results: A total of 36 patients were included in the analysis, 26 GBM and 10 SBM. Models were built using a total of 812 ultrasound images: of the 435 B-mode images, 265 (60.92%) corresponded to GBM and 170 (39.08%) to metastases; of the 377 elastograms, 232 (61.54%) corresponded to GBM and 145 (38.46%) to metastases. For B-mode, the AUC and accuracy values of the classification algorithms ranged from 0.790 to 0.943 and from 72% to 89%, respectively. For elastography, AUC and accuracy values ranged from 0.847 to 0.985 and from 79% to 95%, respectively.

Conclusion: Automated processing of intraoperative ultrasound images through deep learning can generate high-precision classification algorithms that differentiate glioblastomas from metastases. The best performance regarding AUC was achieved by the elastography-based model, supporting the additional diagnostic value that this technique provides.

Introduction

Glioblastomas (GBM) represent approximately 40% to 50% of all malignant brain tumors (1). Brain metastases occur in 9% to 17% of patients diagnosed with cancer; they may appear as single lesions and be the first manifestation of malignancy in 30%–50% of cases (2–4). Properly distinguishing these tumors is essential because they have different treatments and prognoses.

The differential diagnosis of GBM and solitary brain metastases (SBM) can be difficult due to their similarity on conventional neuroimaging: both can present as single, contrast-enhancing lesions with a cystic-necrotic appearance and extensive involvement of perilesional white matter. Distinguishing them is particularly complicated when there is no evidence of a previous neoplasm. In these cases, more specific techniques such as PET (positron emission tomography), specialized magnetic resonance imaging (MRI) sequences such as spectroscopy and diffusion/perfusion, and other forms of quantitative analysis can be used to clarify the origin of these lesions (5–18). However, in many centers these techniques are not available, their acquisition and interpretation can be challenging, and they carry a non-negligible margin of error.

Intraoperative diagnosis using frozen sections can discriminate glial tumors from SBM, but the histopathological result is only obtained after tumor resection has started. A reliable distinction at the earliest stage of surgery would therefore help establish the surgical plan. In the case of GBM, in our center, as in many others, the adopted policy is to attempt a supratotal resection whenever possible, taking into account the relationship with functional areas. In lobectomies, for example, resection includes non-enhancing tumor regions. This approach has been shown to improve the overall survival of these patients (19–25).

On the other hand, in metastases, resection is limited exclusively to the contrast-enhancing tumor component because peritumoral MRI signal alterations are recognized to be produced exclusively by vasogenic edema (26); there is still insufficient evidence to support supramarginal resections in these patients (27). Besides, in some cases, partial resection of brain metastases near functional areas might be indicated as a prior step to adjuvant therapies.

Intraoperative ultrasound is a low-cost, portable, fast technique that provides dynamic information in real time. It has been widely used in brain tumor resection (28, 29), and the simplicity of its application makes it a valuable intraoperative imaging option. Elastography is a relatively new technique in brain tumor surgery. Several publications highlight its importance because it provides better image contrast than B-mode and, especially, because it allows the characterization of elasticity patterns of the tumor and peritumoral regions, through which it is possible to differentiate between several histological types (30–37).

One of the disadvantages of medical imaging techniques is, of course, their interpretation. Ultrasound in particular presents challenges such as operator dependency, noise, and artifacts. Deep learning is a branch of machine learning that has emerged to improve classification tasks using computer vision systems. The basic idea is that medical images contain much more information than the human eye can process and distinguish. Deep learning involves the computation of hierarchical features or representations of a sample, in which high-level abstract features are defined by combining lower-level ones (38). Deep learning approaches based on convolutional neural networks (CNN) are gaining attention in the medical imaging field. Artificial neural networks use a multi-step process that automatically learns features from an image and then extracts them to perform a classification task. CNNs are designed to automatically and adaptively learn features from data through backpropagation using multiple building blocks such as convolution layers, pooling layers, and fully connected layers (39). Transfer learning is a technique that allows the use of a pre-trained CNN model. It has previously been used in oncological classification tasks with high accuracy, and several studies have demonstrated its ability to work with small datasets using minimal image pre-processing (40–44).

The objective of our work is to use intraoperative ultrasound images and a CNN-based deep learning model to differentiate GBM from SBM. We seek to assess the accuracy of intraoperative ultrasound on this task while comparing B-mode against an emerging technique, strain elastography.

Material and Methods

Patient Selection

A retrospective analysis of patients diagnosed with supratentorial tumors who underwent surgery by craniotomy between March 2018 and June 2020 was performed. Cases with a histopathological diagnosis of glioblastoma or metastasis that had an intraoperative ultrasound study were included. Approval was obtained from the ethics committee of our center, and the patients' informed consent was obtained in all cases. Clinical, radiological, and histopathological variables were collected.

Acquisition and Pre-Processing of Ultrasound Images

For the intraoperative ultrasound study, we used a Hitachi Noblus system with a C42 convex probe (8–10 MHz frequency range, 20 mm scan width radius, 80° field-of-view scan angle). The images were acquired after the craniotomy and before the dural opening. The probe was placed perpendicular to the dura, and manual compressions were performed maintaining a constant rhythm and intensity. More details of the elastogram acquisition technique are described in a previous publication by our group (34). The ultrasound machine generates a real-time color map, called an elastogram, simultaneously with B-mode (Figure 1). The color scale represents the tissue's elasticity/consistency, with tones ranging from red (soft) to blue (hard). Elastogram and B-mode acquisitions attempted to cover the largest possible tumor volume and the peritumoral areas with evident echogenicity changes. Several slices in different planes were acquired. The images were stored in DICOM format for offline processing.

Figure 1 Example of intraoperative ultrasound images in a 65-year-old man with a right frontal glioblastoma. (A) Elastogram showing the difference in consistency between the tumor and peritumoral region (green/red) and the rest of the healthy parenchyma (blue). In the lower-right part of the image, a graphic representation of the external compression waves is shown. (B) Simultaneous image in B-mode.

The open-source software ImageJ version 1.50i (National Institutes of Health, Maryland, United States) was used to pre-process the ultrasound images. The first step was to convert the DICOM files to 8-bit TIFF format. For the B-mode images, the tumor and peritumoral area were cropped, removing possible small peripheral artifacts and dark areas. Images with significant artifacts or with unrecognizable areas on elastography were excluded. In the case of elastograms, the elastogram area was cut out by removing the periphery, which corresponded to B-mode-only zones. The images were then rescaled to 299 × 299 pixels; intensities were normalized, and despeckling and smoothing with a Gaussian blur filter were applied. We thus obtained images with similar intensities and a standardized size for the analysis (Figure 2).

Figure 2 Intraoperative ultrasound images pre-processing. Left: original images of (A) elastogram and (B) B-mode. Right: Final image available for automatic analysis.
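
The pre-processing was carried out interactively in ImageJ; as a rough equivalent, the same chain of operations could be scripted, for example in Python. The sketch below is illustrative only: the file names, filter size, and Gaussian sigma are assumptions, the manual cropping of the tumor region is not reproduced, and pydicom, Pillow, NumPy, and SciPy are assumed to be available.

```python
# Illustrative batch equivalent of the ImageJ pre-processing of a grayscale
# B-mode frame (assumed parameters); the manual cropping of the tumor and
# peritumoral area performed in the study is not reproduced here.
import numpy as np
import pydicom
from PIL import Image
from scipy.ndimage import gaussian_filter, median_filter

def preprocess(dicom_path, out_path, size=(299, 299)):
    frame = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    # Normalize intensities to the 8-bit range
    frame = 255.0 * (frame - frame.min()) / (np.ptp(frame) + 1e-8)
    # Despeckle (median filter) and smooth (Gaussian blur); radii are assumptions
    frame = median_filter(frame, size=3)
    frame = gaussian_filter(frame, sigma=1.0)
    # Rescale to the 299 x 299 input size used for the embedding and save as 8-bit
    Image.fromarray(frame.astype(np.uint8)).resize(size).save(out_path)

preprocess("bmode_001.dcm", "bmode_001.tif")  # hypothetical file names
```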

Analysis Using Deep CNN via Transfer Learning

For the generation of an image classification system, the open-source software Orange version 3.26 (University of Ljubljana, Slovenia) was used. The software has a user-friendly interface based on a work panel and the use of widgets (Supplementary Figure 1). After importing the images into the workspace, the first step consisted of a process called embedding. Using the preprocessed ultrasound images, we trained an existing neural network to classify tumors into GBM and SBM. For this, we used a transfer learning method applying Inception V3, one of the most popular image recognition models, which has previously been adapted to the analysis of medical images with excellent results (45–47). Inception V3 is a 48-layer convolutional neural network trained on 1.2 million images from the ImageNet repository (48); each image in the ImageNet Large Scale Visual Recognition Challenge repository belongs to one of 1,000 defined classes. The Inception architecture is schematically summarized in Figure 3. The embedding process relies on the penultimate layer of this deep network: transfer learning is achieved by encoding each image with the features of that layer, so each image is embedded as a 2048-element vector, which is then fed to classic machine learning algorithms.

Figure 3 Schematic representation of Inception v3 architecture (adapted from GoogLeNet) and the workflow used in the transfer learning process via convolutional neural network and classification algorithms.
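
In Orange, this embedding step is performed by the Image Embedding widget. For readers who prefer scripting, a minimal sketch of the same idea using the Keras implementation of Inception V3 is shown below; it is an assumed equivalent, not the exact pipeline used in the study, and the file name is hypothetical.

```python
# Sketch of the embedding step outside Orange: Inception V3 pre-trained on
# ImageNet, truncated before the classification head, with global average
# pooling so that every image is encoded as a 2048-element feature vector.
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.preprocessing import image

embedder = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def embed(img_path):
    img = image.load_img(img_path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return embedder.predict(x, verbose=0)[0]  # 2048-element vector

vec = embed("bmode_001.tif")  # hypothetical file name
```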

Hierarchical Clustering

In the first phase of machine learning, and without previously establishing categories or classes, the distances between the vector representations of all the images were calculated using the cosine as a distance metric. From the distance matrix, a hierarchical classification into groups called clusters was made. The software automatically detects related elements in search of patterns. To determine the elements included in each cluster, the distribution of the GBM and SBM categories within each subgroup was analyzed, both in B-mode and elastography. These groups are represented graphically through a dendrogram.
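
A minimal sketch of this unsupervised step is given below, assuming the 2048-element embeddings are stored in an array X with matching GBM/SBM labels; the linkage method is an assumption and may differ from Orange's default.

```python
# Sketch of the clustering step: cosine distances between embeddings,
# agglomerative clustering, cluster composition by class, and a dendrogram.
# X (n_images x 2048) and labels (list of "GBM"/"SBM") are assumed inputs.
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
from scipy.spatial.distance import pdist

dist = pdist(X, metric="cosine")               # condensed pairwise cosine distances
tree = linkage(dist, method="average")         # linkage method is an assumption
clusters = fcluster(tree, t=2, criterion="maxclust")  # cut into two clusters

for c in (1, 2):                               # class distribution per cluster
    members = [lab for lab, k in zip(labels, clusters) if k == c]
    print(c, {lab: round(members.count(lab) / len(members), 2) for lab in set(members)})

dendrogram(tree, no_labels=True)
plt.show()
```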

Classifiers and Model Validation

In order to develop a prediction model, the following classification algorithms were used: logistic regression (LR) with LASSO (least absolute shrinkage and selection operator) regularization, neural network (NN), random forest (RF), support vector machine (SVM), and k-nearest neighbor (kNN). Model validation was performed using ten-fold stratified cross-validation: most of the sample is used to build or train the model, leaving a portion for validation of its predictions; the stratification maintains the proportion of both categories (GBM and SBM); and the step is repeated several times, guaranteeing that the cases are randomly distributed across the training and test folds. For this reason, cross-validation has proved superior to a simple random split. The models were evaluated using the area under the receiver operating characteristic curve (AUC/ROC), classification accuracy (CA), and precision. Furthermore, confusion matrices were built to determine the correctly and incorrectly classified cases for each algorithm.
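
The evaluation described above can be reproduced, for example, with scikit-learn. The sketch below assumes that X holds the image embeddings and y the histological class (0 = SBM, 1 = GBM); the hyperparameters are illustrative rather than the exact Orange defaults.

```python
# Sketch of the supervised step: the five classifiers evaluated with ten-fold
# stratified cross-validation on the embeddings, reporting AUC, classification
# accuracy (CA), and precision for each algorithm.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

models = {
    "LR (LASSO)": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    "NN": MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(probability=True),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv,
                            scoring=["roc_auc", "accuracy", "precision"])
    print(f"{name}: AUC={scores['test_roc_auc'].mean():.3f} "
          f"CA={scores['test_accuracy'].mean():.3f} "
          f"Precision={scores['test_precision'].mean():.3f}")
```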

Comparison of the Automatic Model Versus Experienced Human Observers

After establishing the best classification algorithm, a training set made up of 70% of the sample was randomly selected, a predictive model was built, and it was then validated on the test set comprising the remaining 30% of the sample, keeping the proportion of each class. Using the same test set, two expert observers analyzed the images, classifying them as GBM or SBM according to their judgment. One of them is a senior neuroradiologist with ten years of experience, and the second is a neurosurgeon with thirty years of experience and knowledge of intraoperative ultrasound images. Both observers were blinded to the definitive histopathological diagnosis. Their results were compared with those of the automatic algorithm.
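
The sketch below illustrates this hold-out comparison under the same assumptions as above (X, y embeddings and labels, SVM as the best-performing classifier); the observers' calls on the same test images would be scored with the same accuracy metric.

```python
# Sketch of the hold-out comparison: stratified 70/30 split, CNN-SVM model
# trained on 70% of the embeddings and scored on the same 30% test images
# that were shown to the two blinded observers (X, y assumed as above).
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

clf = SVC(probability=True).fit(X_train, y_train)
print("CNN-SVM accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# observer_calls: the observers' GBM/SBM judgments on the same test images
# print("Observer accuracy:", accuracy_score(y_test, observer_calls))
```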

Results

Thirty-six patients were included during the study period. Twenty-six cases corresponded to GBM and 10 to metastases. The histological diagnoses and the radiological and demographic characteristics are summarized in Table 1. Illustrative cases and their appearance on MRI and intraoperative ultrasound images are shown in Figure 4.

Table 1 Patient characteristics.

Figure 4 Illustrative cases of the use of intraoperative ultrasound. (A) Axial T1 weighted post-contrast (T1WC) image of a 50-year-old man with a right temporal glioblastoma. (B) Elastogram (left) and B-mode (right). It is a soft tumor with small cystic regions and a peritumoral region of low stiffness compared to the healthy parenchyma. (C) Axial T1WC image of a 70-year-old woman with a left occipital glioblastoma. (D) The elastogram shows a cystic/necrotic lesion with a nodular component of intermediate consistency and a relatively soft peritumoral region. (E) Coronal T1WC image of a 45-year-old man with a right parietal lung metastasis. (F) The elastography image shows a solid/cystic lesion with a soft nodular component and a stiffer peritumoral region. (G) Axial T1WC image of a 52-year-old man with no history of systemic cancer with a left parieto-occipital metastasis. (H) The elastogram shows a large cystic lesion with a small hard region and a peritumoral region of similar consistency.

Models were built using a total of 812 ultrasound images: of the 435 B-mode images, 265 (60.92%) corresponded to GBM and 170 (39.08%) to metastases; of the 377 elastograms, 232 (61.54%) corresponded to GBM and 145 (38.46%) to metastases (Figure 5). On average, twelve B-mode images and eleven elastograms were analyzed per patient. The difference in the number of images between the two modalities is due to the exclusion of several images for low quality or the presence of noise/artifacts.

Figure 5 Flow chart of patient and ultrasound image selection process.

Hierarchical clustering identified two main groups of images. For B-mode, the first cluster included 65% of the GBM images and the second cluster 46.45% of the metastasis images. For elastography, the first cluster contained 80.3% of the GBM images and the second cluster 82.3% of the metastasis images. The dendrograms in Figure 6 graphically demonstrate the distribution and results of the classification.

Figure 6 Graphical representation of the clusters generated from the distance matrices after analyzing images in (A) B-mode and (B) Elastography. Left: dendrograms of the top two clusters. Right: Bar graph of the probabilities of being assigned to each cluster of glioblastomas (blue) and metastases (red).

The performance of the classification algorithms was represented graphically using ROC (receiver operating characteristic) curves (Figure 7). For B-mode, the classification algorithms' AUC and accuracy values ranged from 0.790 to 0.943 and from 72% to 89%, respectively (Table 2). The elastography-based models demonstrated the best performance, with AUC and accuracy values ranging from 0.847 to 0.985 and from 79% to 95%, respectively (Table 3 and Supplementary Figure 2).

Figure 7 Representation of classifier performance using the ROC (Receiver Operating Characteristics) curve for (A) B-mode and (B) Elastography. The best results were obtained by the Support Vector Machine (SVM) and K-Nearest Neighbor (k-NN) algorithms.

Table 2 Diagnostic performance of classification algorithms based on Ultrasound B-mode images.

Table 3 Diagnostic performance of classification algorithms based on Ultrasound Elastography images.

After the random selection of cases, the results of the human observers versus the automatic classification algorithm are summarized in Table 4. The accuracy achieved by the experienced observers was up to 61% for B-mode and 68% for elastography. For the CNN-based automatic system, the accuracy was 88% in B-mode and 93% in elastography.

Table 4 Comparison between convolutional neural network (CNN)-SVM model performance and the two expert observers.

Discussion

In the present study, we have developed a highly accurate classification system that allows GBM to be differentiated from SBM using automatic intraoperative ultrasound image processing through convolutional neural networks. Furthermore, elastography showed slightly better performance for the classification of these tumors compared to the B-mode.

Among the strengths of our work, this is the first time that intraoperative ultrasound B-mode and elastography have been applied to discriminate glioblastomas from metastases. Besides, our study follows a cutting-edge methodology in which deep learning techniques are applied to the analysis of ultrasound images; the combination of these two technologies in brain tumor pathology has no previous references in the literature.

We are aware of our limitations; it is worth mentioning that the sample size from which we started is relatively small. This can cause overfitting and the creation of an over-optimistic predictive model. Our strategy to overcome this issue was to use all the images available in each case, including different sections and projections of each tumor. We thereby reached a sample size and a class balance sufficient to carry out an analysis based on artificial intelligence techniques.

On the other hand, we recognize the limitations of elastography itself, such as the variability of elasticity threshold values and the absence of image quality control; elastograms also often contain irrelevant patterns that can hinder both handcrafted feature extraction and DL methods such as CNNs.

Deep learning, a branch of machine learning, can automatically process and learn mid-level and high-level abstract characteristics acquired from raw data, in this case ultrasound images. Still, tumor classification into subtypes is difficult due to variations in shape, size, and intensity, and because different histological types can show similar patterns.

The image acquisition and processing methodology is rigorous and clear at every step. Strain elastography has been used in previous studies with promising results regarding its application in brain tumor resection (34, 36). Pre-processing the ultrasound images is a fundamental step, which was performed with the highest reliability using user-friendly open-source software that allows robust analysis without adding complexity (49, 50).

Analysis through deep learning has been demonstrated to be superior in image recognition compared with conventional radiological techniques and handcrafted radiomics (51, 52). The methods applied in our study avoid some cumbersome steps, such as tumor segmentation, which is a significant limitation in this type of work and may bias the selection of variables of interest, such as texture features. The difference is that a CNN, through transfer learning, takes advantage of a previously trained network of proven validity to generate classification systems that can automatically, without human intervention, distinguish one class from another, in our case GBM from SBM. A disadvantage of these systems is the inability to know which characteristics the software has used to generate its predictions, which is why they are sometimes compared to a "black box" (53). Although feature selection techniques could be applied after converting the images to vector representations, these techniques are still not validated. Using DL models, we lose some interpretability in exchange for gaining more robust and generalizable prediction systems based on much more complex characteristics.

In our study, a comparison was conducted between the classification algorithms and experienced human observers in discriminating these tumor types from ultrasound images. According to our results, the DL-based model seems to be more precise and accurate at differentiating one tumor type from the other. These findings make it necessary to improve our knowledge of how artificial intelligence works; only in this way will these new technological resources serve as support tools in the neurosurgical and radiological fields.

In our study, the best performance regarding AUC and accuracy in the classification of SBM and GBM was achieved by the elastography-based model compared with B-mode. One possible explanation for this advantage could be the better contrast offered by the color images of the elastograms, as previously published (33). Furthermore, we believe that one of the fundamental differences between the two tumor types lies in the correlation between the peritumoral regions' histology and their radiological appearance. Although this correlation has not been proven, previous studies indicate that the elasticity patterns of gliomas and metastases differ (34, 54). These differences do not seem to be captured by B-mode; thus, the elastographic pattern of the peritumoral areas could differentiate both histological types through automatic analysis of this imaging modality. It is worth mentioning that the elastograms are produced in RGB (red-green-blue) image format; the elastogram therefore results from the superposition of the B-mode image and the colorimetric scale of the tissues' elasticity. Our study was carried out on the original images produced by the ultrasound machine because we wanted to use the same pictures for the classification task performed by the human observers. Another alternative for future studies could be to perform an image decomposition in HSB (hue-saturation-brightness) format and then extract the hue component.
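
As a pointer for that future work, the hue channel can be isolated from an RGB elastogram in a few lines; the sketch below is illustrative, the file name is hypothetical, and Pillow's HSV conversion is used as the HSB decomposition.

```python
# Sketch of the proposed alternative: decompose the RGB elastogram into
# hue-saturation-brightness (HSV in Pillow) and keep only the hue channel,
# which encodes the elasticity color scale.
import numpy as np
from PIL import Image

rgb = Image.open("elastogram_001.tif").convert("RGB")   # hypothetical file
hue = np.array(rgb.convert("HSV"))[:, :, 0]             # hue component only
Image.fromarray(hue).save("elastogram_001_hue.tif")
```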

Regarding the differentiation of GBM from SBM, we know that there are multiple radiological techniques available for pre-operative or non-invasive applications (5, 8–11, 13, 15–17, 55); besides, intraoperative histopathological techniques are currently the reference standard for decision-making (56). Our study does not intend to make a comparison with the techniques mentioned above but to demonstrate, on the one hand, the high value that ultrasound, and especially elastography, holds in the study of brain tumors and, on the other hand, to highlight that automatic image processing is a highly reliable technique. Therefore, we believe it is essential to develop automatic ultrasound image analysis methods to increase precision in diagnosis, evaluation, and intervention based on this technique.

Our work demonstrates that automated processing of intraoperative ultrasound images through deep learning can generate high-precision classification algorithms that differentiate glioblastomas from metastases. The best performance regarding AUC and accuracy was achieved by the elastography-based model, supporting the additional value that this technique provides by capturing brain tumor elasticity. Building on these results, the next step will be to obtain real-time feedback based on intraoperative image analysis, allowing the surgeon to adapt the surgical strategy and even guide tumor resection.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by University Hospital Río Hortega Ethics Committee. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

Conception and design: SC and RS. Material preparation, data collection, and analysis were performed by SC, SG-G, MV-C, GF-P, IA, MF-P, and TZ. The first draft of the manuscript was written by SC and all authors commented on previous versions of the manuscript. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fonc.2020.590756/full#supplementary-material

Supplementary Figure 1 | Orange visual environment and the workflow used in the construction of the predictive model.

Supplementary Figure 2 | Confusion matrices generated by the different classification algorithms based on (left) B-Mode and (right) Elastography. (A) k-Nearest Neighbor; (B) Logistic Regression; (C) Neural Network; (D) Random Forest and (E) Support Vector Machine. The number of instances correctly (purple) and misclassified (pink) are shown.

References

1. Sherwood PR, Stommel M, Murman DL, Given CW, Given BA. Primary malignant brain tumor incidence and Medicaid enrollment. Neurology (2004) 62:1788–93. doi: 10.1212/01.WNL.0000125195.26224.7C

2. Schiff D. Single Brain Metastasis. Curr Treat Options Neurol (2001) 3:89–99. doi: 10.1007/s11940-001-0027-4

3. Giordana MT, Cordera S, Boghi A. Cerebral metastases as first symptom of cancer: a clinico-pathologic study. J Neurooncol (2000) 50:265–73. doi: 10.1023/a:1006413001375

4. Nayak L, Lee EQ, Wen PY. Epidemiology of brain metastases. Curr Oncol Rep (2012) 14:48–54. doi: 10.1007/s11912-011-0203-y

5. Abdel Razek AAK, Talaat M, El-Serougy L, Abdelsalam M, Gaballa G. Differentiating Glioblastomas from Solitary Brain Metastases Using Arterial Spin Labeling Perfusion– and Diffusion Tensor Imaging–Derived Metrics. World Neurosurg (2019) 127:e593–8. doi: 10.1016/j.wneu.2019.03.213

6. Maurer MH, Synowitz M, Badakshi H, Lohkamp LN, Wüstefeld J, Schäfer ML, et al. Glioblastoma multiforme versus solitary supratentorial brain metastasis: Differentiation based on morphology and magnetic resonance signal characteristics. RoFo Fortschr auf dem Gebiet der Rontgenstrahlen und der Bildgeb Verfahren (2013) 185:235–40. doi: 10.1055/s-0032-1330318

7. Zhang G, Chen X, Zhang S, Ruan X, Gao C, Liu Z, et al. Discrimination Between Solitary Brain Metastasis and Glioblastoma Multiforme by Using ADC-Based Texture Analysis: A Comparison of Two Different ROI Placements. Acad Radiol (2019) 26:1466–72. doi: 10.1016/j.acra.2019.01.010

8. Qian Z, Li Y, Wang Y, Li L, Li R, Wang K, et al. Differentiation of glioblastoma from solitary brain metastases using radiomic machine-learning classifiers. Cancer Lett (2019) 451:128–35. doi: 10.1016/j.canlet.2019.02.054

9. Petrujkić K, Milošević N, Rajković N, Stanisavljević D, Gavrilović S, Dželebdžić D, et al. Computational quantitative MR image features - a potential useful tool in differentiating glioblastoma from solitary brain metastasis. Eur J Radiol (2019) 119:108634. doi: 10.1016/j.ejrad.2019.08.003

10. Askaner K, Rydelius A, Engelholm S, Knutsson L, Lätt J, Abul-Kasim K, et al. Differentiation between glioblastomas and brain metastases and regarding their primary site of malignancy using dynamic susceptibility contrast MRI at 3T. J Neuroradiol (2019) 46:367–72. doi: 10.1016/j.neurad.2018.09.006

11. Sunwoo L, Yun TJ, You SH, Yoo RE, Kang KM, Choi SH, et al. Differentiation of glioblastoma from brain metastasis: Qualitative and quantitative analysis using arterial spin labeling MR imaging. PloS One (2016) 11:1–13. doi: 10.1371/journal.pone.0166662

12. Byrnes TJD, Barrick TR, Bell BA, Clark CA. Diffusion tensor imaging discriminates between glioblastoma and cerebral metastases in vivo. NMR BioMed (2011) 24:54–60. doi: 10.1002/nbm.1555

13. Artzi M, Bressler I, Ben Bashat D. Differentiation between glioblastoma, brain metastasis and subtypes using radiomics analysis. J Magn Reson Imaging (2019) 50:519–28. doi: 10.1002/jmri.26643

14. She D, Xing Z, Cao D. Differentiation of Glioblastoma and Solitary Brain Metastasis by Gradient of Relative Cerebral Blood Volume in the Peritumoral Brain Zone Derived from Dynamic Susceptibility Contrast Perfusion Magnetic Resonance Imaging. J Comput Assist Tomogr (2019) 43:13–7. doi: 10.1097/RCT.0000000000000771

15. Tsougos I, Svolos P, Kousi E, Fountas K, Theodorou K, Fezoulidis I, et al. Differentiation of glioblastoma multiforme from metastatic brain tumor using proton magnetic resonance spectroscopy, diffusion and perfusion metrics at 3 T. Cancer Imaging (2012) 12:423–36. doi: 10.1102/1470-7330.2012.0038

16. Bauer AH, Erly W, Moser FG, Maya M, Nael K. Differentiation of solitary brain metastasis from glioblastoma multiforme: a predictive multiparametric approach using combined MR diffusion and perfusion. Neuroradiology (2015) 57:697–703. doi: 10.1007/s00234-015-1524-6

17. Skogen K, Schulz A, Helseth E, Ganeshan B, Dormagen JB, Server A. Texture analysis on diffusion tensor imaging: discriminating glioblastoma from single brain metastasis. Acta Radiol (2019) 60:356–66. doi: 10.1177/0284185118780889

18. Kadota Y, Hirai T, Azuma M, Hattori Y, Khant ZA, Hori M, et al. Differentiation between glioblastoma and solitary brain metastasis using neurite orientation dispersion and density imaging. J Neuroradiol (2020) 47:197–202. doi: 10.1016/j.neurad.2018.10.005

19. Eyüpoglu IY, Hore N, Merkel A, Buslei R, Buchfelder M, Savaskan N. Supra-complete surgery via dual intraoperative visualization approach (DiVA) prolongs patient survival in glioblastoma. Oncotarget (2016) 7:25755–68. doi: 10.18632/oncotarget.8367

20. Esquenazi Y, Friedman E, Liu Z, Zhu J-J, Hsu S, Tandon N. The Survival Advantage of “Supratotal” Resection of Glioblastoma Using Selective Cortical Mapping and the Subpial Technique. Neurosurgery (2017) 81:275–88. doi: 10.1093/neuros/nyw174

21. Roh TH, Kang S-G, Moon JH, Sung KS, Park HH, Kim SH, et al. Survival benefit of lobectomy over gross-total resection without lobectomy in cases of glioblastoma in the noneloquent area: a retrospective study. J Neurosurg (2020) 132:895–901. doi: 10.3171/2018.12.JNS182558

22. Al-Holou WN, Hodges TR, Everson RG, Freeman J, Zhou S, Suki D, et al. Perilesional Resection of Glioblastoma Is Independently Associated With Improved Outcomes. Neurosurgery (2020) 86:112–21. doi: 10.1093/neuros/nyz008

23. Mampre D, Ehresman J, Pinilla-Monsalve G, Osorio MAG, Olivi A, Quinones-Hinojosa A, et al. Extending the resection beyond the contrast-enhancement for glioblastoma: feasibility, efficacy, and outcomes. Br J Neurosurg (2018) 32:528–35. doi: 10.1080/02688697.2018.1498450

24. Jiang H, Cui Y, Liu X, Ren X, Li M, Lin S. Proliferation-dominant high-grade astrocytoma: survival benefit associated with extensive resection of FLAIR abnormality region. J Neurosurg (2019) 22:1–8. doi: 10.3171/2018.12.JNS182775

25. Glenn CA, Baker CM, Conner AK, Burks JD, Bonney PA, Briggs RG, et al. An Examination of the Role of Supramaximal Resection of Temporal Lobe Glioblastoma Multiforme. World Neurosurg (2018) 114:e747–55. doi: 10.1016/j.wneu.2018.03.072

26. Pekmezci M, Perry A. Neuropathology of brain metastases. Surg Neurol Int (2013) 4:245. doi: 10.4103/2152-7806.111302

27. Hardesty DA, Nakaji P. The Current and Future Treatment of Brain Metastases. Front Surg (2016) 25:3–30. doi: 10.3389/fsurg.2016.00030

28. Jakola AS, Unsgård G, Solheim O. Quality of life in patients with intracranial gliomas: the impact of modern image-guided surgery. J Neurosurg (2011) 114:1622–30. doi: 10.3171/2011.1.JNS101657

29. Munkvold BKR, Jakola AS, Reinertsen I, Sagberg LM, Unsgård G, Solheim O. The Diagnostic Properties of Intraoperative Ultrasound in Glioma Surgery and Factors Associated with Gross Total Tumor Resection. World Neurosurg (2018) 115:e129–36. doi: 10.1016/j.wneu.2018.03.208

30. Chauvet D, Imbault M, Capelle L, Demene C, Mossad M, Karachi C, et al. In Vivo Measurement of Brain Tumor Elasticity Using Intraoperative Shear Wave Elastography. Ultraschall Med - Eur J Ultrasound (2015) 37:584–90. doi: 10.1055/s-0034-1399152

31. Chan HW. Optimising the Use and Assessing the Value of Intraoperative Shear Wave Elastography in Neurosurgery. Doctoral thesis. University of London (2016).

32. Selbekk T, Bang J, Unsgaard G. Strain processing of intraoperative ultrasound images of brain tumours: Initial results. Ultrasound Med Biol (2005) 31:45–51. doi: 10.1016/j.ultrasmedbio.2004.09.011

33. Selbekk T, Brekken R, Indergaard M, Solheim O, Unsgård G. Comparison of contrast in brightness mode and strain ultrasonography of glial brain tumours. BMC Med Imaging (2012) 12:11. doi: 10.1186/1471-2342-12-11

34. Cepeda S, Barrena C, Arrese I, Fernandez-Pérez G, Sarabia R. Intraoperative Ultrasonographic Elastography: A Semi-Quantitative Analysis of Brain Tumor Elasticity Patterns and Peritumoral Region. World Neurosurg (2020) 135:e258–70. doi: 10.1016/j.wneu.2019.11.133

35. Prada F, Del Bene M, Moiraghi A, Casali C, Legnani FG, Saladino A, et al. From Grey Scale B-Mode to Elastosonography: Multimodal Ultrasound Imaging in Meningioma Surgery—Pictorial Essay and Literature Review. BioMed Res Int (2015) 2015:1–13. doi: 10.1155/2015/925729

36. Prada F, Del Bene M, Rampini A, Mattei L, Casali C, Vetrano IG, et al. Intraoperative Strain Elastosonography in Brain Tumor Surgery. Oper Neurosurg (2019) 17:227–36. doi: 10.1093/ons/opy323

37. Chakraborty A, Berry G, Bamber J, Dorward N. Intra-operative Ultrasound Elastography and Registered Magnetic Resonance Imaging of Brain Tumours: A Feasibility Study. Ultrasound (2006) 14:43–9. doi: 10.1179/174313406x82461

38. Sermanet P, Eigen D, Zhang X, Mathieu M, Fergus R, LeCun Y. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks. (2013). http://arxiv.org/abs/1312.6229.

39. Shin H-C, Roth HR, Gao M, Lu L, Xu Z, Nogues I, et al. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans Med Imaging (2016) 35:1285–98. doi: 10.1109/TMI.2016.2528162

40. Zhou L, Zhang Z, Chen Y-C, Zhao Z-Y, Yin X-D, Jiang H-B. A Deep Learning-Based Radiomics Model for Differentiating Benign and Malignant Renal Tumors. Transl Oncol (2019) 12:292–300. doi: 10.1016/j.tranon.2018.10.012

41. Deniz E, Şengür A, Kadiroğlu Z, Guo Y, Bajaj V, Budak Ü. Transfer learning based histopathologic image classification for breast cancer detection. Health Inf Sci Syst (2018) 6:18. doi: 10.1007/s13755-018-0057-x

42. Deepak S, Ameer PM. Brain tumor classification using deep CNN features via transfer learning. Comput Biol Med (2019) 111:103345. doi: 10.1016/j.compbiomed.2019.103345

43. Maki S, Furuya T, Horikoshi T, Yokota H, Mori Y, Ota J, et al. A Deep Convolutional Neural Network With Performance Comparable to Radiologists for Differentiating Between Spinal Schwannoma and Meningioma. Spine (Phila Pa 1976) (2020) 45:694–700. doi: 10.1097/BRS.0000000000003353

44. Godec P, Pančur M, Ilenič N, Čopar A, Stražar M, Erjavec A, et al. Democratized image analytics by visual programming through integration of deep models and small-scale machine learning. Nat Commun (2019) 10:1–7. doi: 10.1038/s41467-019-12397-x

45. Coudray N, Ocampo PS, Sakellaropoulos T, Narula N, Snuderl M, Fenyö D, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med (2018) 24:1559–67. doi: 10.1038/s41591-018-0177-5

46. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature (2017) 542:115–8. doi: 10.1038/nature21056

47. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA (2016) 316:2402–10. doi: 10.1001/jama.2016.17216

48. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the Inception Architecture for Computer Vision. (2015). http://arxiv.org/abs/1512.00567.

49. Van Sloun RJG, Cohen R, Eldar YC. Deep Learning in Ultrasound Imaging. Proc IEEE (2020) 108:11–29. doi: 10.1109/JPROC.2019.2932116

50. Liu S, Wang Y, Yang X, Lei B, Liu L, Li SX, et al. Deep Learning in Medical Ultrasound Analysis: A Review. Engineering (2019) 5:261–75. doi: 10.1016/j.eng.2018.11.020

51. Brehar R, Mitrea DA, Vancea F, Marita T, Nedevschi S, Lupsor-Platon M, et al. Comparison of deep-learning and conventional machine-learning methods for the automatic recognition of the hepatocellular carcinoma areas from ultrasound images. Sensors (Switzerland) (2020) 20:1–22. doi: 10.3390/s20113085

52. Burlina P, Billings S, Joshi N, Albayda J. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods. PloS One (2017) 12:1–15. doi: 10.1371/journal.pone.0184059

53. Parekh VS, Jacobs MA. Deep learning and radiomics in precision medicine. Expert Rev Precis Med Drug Dev (2019) 4:59–72. doi: 10.1080/23808993.2019.1585805

54. Scholz M, Noack V, Pechlivanis I, Engelhardt M, Fricke B, Linstedt U, et al. Vibrography during tumor neurosurgery. J Ultrasound Med (2005) 24:985–92. doi: 10.7863/jum.2005.24.7.985

55. Chen C, Ou X, Wang J, Guo W, Ma X. Radiomics-based machine learning in differentiation between glioblastoma and metastatic brain tumors. Front Oncol (2019) 9:806. doi: 10.3389/fonc.2019.00806

56. Chand P, Amit S, Gupta R, Agarwal A. Errors, limitations, and pitfalls in the diagnosis of central and peripheral nervous system lesions in intraoperative cytology and frozen sections. J Cytol (2016) 33:93–7. doi: 10.4103/0970-9371.182530

Keywords: brain tumor, elastography, intraoperative ultrasound, deep learning, convolutional neural network

Citation: Cepeda S, García-García S, Arrese I, Fernández-Pérez G, Velasco-Casares M, Fajardo-Puentes M, Zamora T and Sarabia R (2021) Comparison of Intraoperative Ultrasound B-Mode and Strain Elastography for the Differentiation of Glioblastomas From Solitary Brain Metastases. An Automated Deep Learning Approach for Image Analysis. Front. Oncol. 10:590756. doi: 10.3389/fonc.2020.590756

Received: 02 August 2020; Accepted: 17 December 2020;
Published: 02 February 2021.

Edited by:

Geirmund Unsgård, Norwegian University of Science and Technology, Norway

Reviewed by:

David Bouget, SINTEF, Norway
Erik Magnus Berntsen, Norwegian University of Science and Technology, Norway
Tormod Selbekk, SonoClear AS, Norway

Copyright © 2021 Cepeda, García-García, Arrese, Fernández-Pérez, Velasco-Casares, Fajardo-Puentes, Zamora and Sarabia. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Santiago Cepeda, cepeda_santiago@hotmail.com; orcid.org/0000-0003-1667-8548

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.