
MINI REVIEW article

Front. Neurol., 23 September 2025

Sec. Artificial Intelligence in Neurology

Volume 16 - 2025 | https://doi.org/10.3389/fneur.2025.1581422


Artificial intelligence in the task of segmentation and classification of brain metastases images: current challenges and future opportunities

  • 1Department of Medical Imaging, Southwest Medical University, Luzhou, China
  • 2Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
  • 3School of Nursing, Southwest Medical University, Luzhou, China
  • 4Wound Healing Basic Research and Clinical Applications Key Laboratory of Luzhou, School of Nursing, Southwest Medical University, Luzhou, China
  • 5School of Clinical Medicine, Southwest Medical University, Luzhou, China
  • 6Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
  • 7Department of Operating Room, The Affiliated Hospital of Southwest Medical University, Luzhou, China

Brain metastases (BM) are common complications of advanced cancer, posing significant diagnostic and therapeutic challenges for clinicians. Therefore, the ability to accurately detect, segment, and classify brain metastases is crucial. This review focuses on the application of artificial intelligence (AI) in brain metastasis imaging analysis, including classical machine learning and deep learning techniques. It also discusses the role of AI in brain metastasis detection and segmentation, the differential diagnosis of brain metastases from primary brain tumors such as glioblastoma, the identification of the source of brain metastases, and the differentiation between radiation necrosis and recurrent tumors after radiotherapy. Additionally, the advantages and limitations of various AI methods are discussed, with a focus on recent advancements and future research directions. AI-driven imaging analysis holds promise for improving the accuracy and efficiency of brain metastasis diagnosis, thereby enhancing treatment plans and patient prognosis.

1 Introduction

Brain metastases (BM) are among the most common and severe complications of advanced malignant tumors, often indicating poor prognosis and posing a major challenge to clinical tumor treatment (1). The incidence of brain metastases is high, with estimates suggesting that up to one-third of cancer patients will develop them (2). With the aging of the global population, advances in systemic treatment, and the widespread use of imaging technologies such as magnetic resonance imaging (MRI), the detection and diagnosis rates of BM have been increasing (3, 4). Among the primary tumors that lead to BM, lung cancer, breast cancer, and melanoma are the most common, although brain metastases from gastrointestinal tumors, renal cell carcinoma, and gynecological cancers are also on the rise (4, 5). Notably, brain metastases significantly worsen patient prognosis regardless of the primary tumor type. Taking breast cancer as an example, patients may survive for as long as 28 years, but once brain metastasis occurs, average survival falls to about 10 months (6). Although treatments such as monoclonal antibodies, tyrosine kinase inhibitors (TKI), and antibody-drug conjugates (ADC) have improved overall survival (OS) to some extent, brain metastasis remains a severe challenge in breast cancer treatment. There is therefore an urgent need for timely and precise imaging detection, segmentation, and classification technologies that can detect and diagnose brain metastases early, giving patients more time for treatment and improving prognosis.

First, precise detection and segmentation help clinicians accurately assess tumor size, location, and number, providing accurate targeting for local treatments such as surgery and radiotherapy. This precise assessment relies on imaging technology. MRI detects small lesions with high sensitivity and clearly displays critical tumor characteristics, making it the recognized primary tool for diagnosing BM (4, 7, 8). Computed tomography (CT), however, retains irreplaceable value in preliminary screening of brain metastases, rapid assessment in emergencies, and special clinical scenarios such as institutions with limited equipment or patients with MRI contraindications (9, 10). Building on these imaging technologies, precise treatment of brain metastases can be effectively implemented. Stereotactic radiosurgery (SRS) has become an important treatment modality for brain metastases because it delivers highly focused radiation to metastatic regions while minimizing damage to surrounding normal brain tissue; such precise treatment demands extremely high accuracy in imaging detection and segmentation (11). Nevertheless, traditional manual detection and segmentation have obvious limitations: the process is time-consuming and cumbersome (12), and target delineation is susceptible to observer subjectivity, with inter-physician differences increasing uncertainty in radiotherapy planning (2). The demand for artificial intelligence-based automated detection and segmentation is therefore increasingly urgent. Many deep learning algorithms have already been applied to brain metastasis detection and segmentation (13), aiming to improve management efficiency and treatment outcomes for patients with multiple metastases. Precise automated detection and segmentation not only enhance the reliability and efficiency of treatment planning but also lay the foundation for subsequent fine-grained analysis of imaging features and the development of personalized treatment strategies.

The classification of brain metastases faces several challenges, including determining lesion nature, identifying the primary source, and evaluating treatment response. The first challenge is distinguishing brain metastases from primary brain tumors such as glioblastoma (GBM). For patients previously diagnosed with malignant tumors, a new brain lesion must be clarified as either a primary brain tumor such as GBM or a metastasis from the known cancer. GBM and brain metastases demonstrate high radiological similarity, typically presenting as rim-enhancing lesions with surrounding T2 hyperintensity (14), yet their treatment strategies differ drastically, making accurate preoperative differentiation crucial. For patients presenting with brain lesions as their first manifestation, identifying the origin of the metastases has significant clinical implications. Brain metastases from different primary sites require distinct therapeutic approaches; differentiating metastases originating from lung cancer, breast cancer, melanoma, and other sources helps guide personalized treatment, such as targeted therapies for specific gene mutations and immune checkpoint inhibitors (15). However, accurately determining the origin of brain metastases is difficult when definitive information about the primary lesion is lacking. Neuropathological examination remains the gold standard, but as an invasive procedure it carries surgical risks, including hemorrhage, infection, and neurological impairment, and some patients cannot tolerate it because of poor physical condition or deep-seated lesion location. Routine radiological assessment, though non-invasive, rarely allows the origin of brain metastases to be determined accurately by manual analysis alone. In clinical practice, 2–14% of patients present with brain metastases as the initial manifestation without an identified primary tumor (16), and in some the primary lesion remains unidentified even at the terminal stage of disease. Failure to identify the origin promptly and accurately complicates treatment selection, misses the optimal therapeutic window, and severely affects prognosis. Developing rapid, reliable, and non-invasive imaging-based methods for primary tumor identification, with artificial intelligence assisting image analysis and reducing dependence on invasive examinations, would therefore provide an important complementary tool for clinical diagnosis, with significant value for optimizing decision-making and improving the patient treatment experience. Beyond determining the primary tumor type, patients with brain metastases who have undergone radiotherapy require differentiation between radiation necrosis and recurrent tumor. Research indicates that distinguishing these two conditions using MRI alone is often challenging (17). Although differentiation can be achieved through biopsy, stereotactic biopsy may suffer sampling bias in lesions containing both tumor recurrence and radiation necrosis, and biopsy carries procedure-related risks and cannot be repeated regularly (18). More importantly, the treatment strategies for the two conditions are fundamentally different: tumor recurrence requires continued anti-tumor treatment, while radiation necrosis necessitates cessation of radiotherapy and management of the necrotic lesion, making accurate differentiation decisive for treatment decisions. All of these differential diagnostic tasks, whether distinguishing brain metastases from primary brain tumors, tracing the primary tumor origin, or differentiating post-radiation necrosis from recurrent tumor, face overlapping radiological features and diagnostic difficulty. There is therefore an urgent need to leverage artificial intelligence to analyze large-scale medical imaging data, improve diagnostic accuracy and efficiency, and provide stronger support for clinical decision-making.

Through early precise imaging analysis and the subsequent development of personalized treatment strategies, there is potential to maximize control of tumor progression and improve patient prognosis (19). This review therefore focuses on recent advances in artificial intelligence (AI) applications for brain metastasis imaging analysis, encompassing deep learning-based lesion detection and segmentation, differential diagnosis between brain metastases and primary brain tumors, primary tumor origin identification, and differentiation between radiation necrosis and recurrent tumors after radiotherapy. We systematically describe the applications, advantages, and limitations of classical machine learning methods and deep learning algorithms across these tasks, and outline future research directions in this field, aiming to provide more effective decision support for clinical practice and promote further development of the relevant AI technologies (Figure 1).


Figure 1. Overview of the main aspects of this review. Referenced and reproduced with permission from Becker et al. (26), Fang et al. (130), Liang et al. (95), Kumar et al. (131), Prasad et al. (129), Hu et al. (46), Park et al. (66), Shi et al. (75), and Larroza et al. (85).

2 Brain metastases image detection and segmentation

In the field of artificial intelligence, tumor image detection and segmentation represent two core tasks in medical image analysis. Detection tasks aim to identify and localize tumor lesions within images, typically outputting spatial position and bounding box information of the lesions. Segmentation tasks further perform pixel-level precise delineation of tumor regions, assigning category labels to each pixel in the image, thereby achieving accurate tumor contour delineation and regional quantification. These two technologies provide clinicians with critical information such as precise tumor localization, morphological feature analysis, and volumetric measurements, playing important roles in diagnosis, treatment planning, and therapeutic efficacy assessment.
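To make these outputs concrete, the sketch below shows how a binary segmentation mask yields the two quantities mentioned above: overlap with a reference contour (Dice coefficient) and lesion volume. It is a minimal illustration with synthetic masks, not code from any cited study.

```python
# Minimal illustration (synthetic data): Dice overlap and lesion volume
# from binary segmentation masks.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Overlap between predicted and reference binary masks (1 = lesion)."""
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def lesion_volume_ml(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Lesion volume in milliliters, given voxel spacing in mm (1 mL = 1000 mm^3)."""
    return float(mask.sum() * np.prod(voxel_size_mm) / 1000.0)

rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5    # stand-in for a model's output mask
truth = rng.random((64, 64, 64)) > 0.5   # stand-in for an expert contour
print(dice_coefficient(pred, truth), lesion_volume_ml(truth, (0.9, 0.9, 1.0)))
```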

However, the detection and segmentation of brain metastases present unique clinical challenges. Brain metastases often manifest as multiple small lesions, with individual lesions potentially measuring only a few millimeters in diameter and exhibiting relatively low contrast with surrounding brain tissue on MRI (20). Such small, low-contrast lesions are easy for radiologists to miss during visual identification, particularly when noise and artifacts further compromise assessment. The scattered distribution and uncertain number of lesions additionally increase the workload and the risk of missed diagnoses in manual detection (20–22). Meanwhile, the low contrast also makes it difficult for physicians to precisely delineate lesion boundaries, affecting the accuracy of subsequent treatment planning (20).

Under these circumstances, although classical machine learning methods such as threshold segmentation (23) and region growing algorithms (24) offer strong interpretability, low computational cost, and minimal hardware requirements (25–27), they show significant technical limitations when confronting the characteristics unique to brain metastases, namely small lesions, low contrast, and multifocal distribution. These methods are relatively sensitive to image quality and noise and struggle to capture the diversity of tumor morphology and the irregularity of boundaries (28, 29), limiting their widespread application in brain metastases detection and segmentation.

To address these challenges, researchers have developed various deep learning-based detection and segmentation methods, with technological evolution progressing from early CNN local feature extraction, to U-Net’s global–local information fusion, then to DeepMedic’s specialized 3D processing, and more recently to the intelligent development of adaptive frameworks. Deep learning technology, leveraging its powerful feature learning capabilities and effective utilization of large-scale data, has demonstrated significant advantages in processing complex image features and achieving high-precision segmentation, gradually becoming the mainstream technology in this field. In light of this, this review will focus on the applications of deep learning networks and their variants in brain metastases image detection and segmentation.

2.1 Early convolutional neural networks and variants

Convolutional neural networks (CNNs) were among the earliest deep learning networks applied to brain metastases image detection and segmentation (Figure 2). CNNs extract local features from images through convolutional layers and use pooling layers to reduce computational load while increasing feature robustness (30). Losch et al. (31) pioneered the application of ConvNet to brain metastases segmentation in 2015, achieving 82.8% sensitivity in detecting lesions larger than 3 mm and laying the foundation for deep learning applications in this field. However, early CNN methods exposed significant technical limitations, including high false positive rates (0.05 per slice), insufficient segmentation accuracy for small metastatic lesions, and deficiencies in feature extraction, multi-scale information fusion, and contextual information utilization. The fundamental cause of these problems is that traditional feedforward CNN structures lack global contextual modeling capability; relying solely on local convolutions makes it difficult to distinguish subtle differences between lesions and normal brain tissue.
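As a minimal illustration of the convolution-plus-pooling pattern described above, the PyTorch sketch below stacks two convolution/pooling stages and a per-pixel scoring head. It is a toy model for exposition, not Losch et al.'s ConvNet; all layer sizes are assumptions.

```python
# Toy 2D CNN illustrating convolution + pooling (not a published architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLesionCNN(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),  # local features
            nn.ReLU(),
            nn.MaxPool2d(2),  # pooling: less computation, more robustness
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # per-pixel lesion logit

    def forward(self, x):
        logits = self.head(self.features(x))  # predicted at 1/4 resolution
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

print(TinyLesionCNN()(torch.randn(2, 1, 128, 128)).shape)  # [2, 1, 128, 128]
```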


Figure 2. Early convolutional neural network architectures for image segmentation of brain metastases. (a) Typical architecture of a ConvNet. (b) Network architecture of BMDS net. (c) The modified GoogLeNet architecture. (d) Network architecture of CropNet. (e) Structure of the Faster R-CNN deep-learning approach. (f) Network architecture of a gated high-resolution neural network. Referenced and reproduced with permission from Losch (31), Xue et al. (32), Grøvik et al. (12), Dikici et al. (33), Zhang et al. (122), and Qu et al. (34).

To overcome the limitations of traditional CNN architectures, researchers have pursued technical improvements from different perspectives. In terms of dimensional extension, the 2.5D GoogLeNet CNN model proposed by Grøvik et al. (12) attempted to strike a balance between computational efficiency and feature capture capability, better capturing inter-slice features while avoiding the computational burden of full 3D CNNs. However, this method still exhibited performance limitations in false positive control and multiple lesion detection. Regarding network architecture design, the BMDS Net cascaded 3D fully convolutional network proposed by Xue et al. (32) adopted a two-stage strategy of detection-localization followed by segmentation, improving segmentation accuracy to some extent while reducing computational complexity. Nevertheless, it still faced challenges when handling tasks involving discrimination of adjacent lesions or small-volume lesions. These early improvement efforts demonstrated that simply increasing architectural complexity cannot fundamentally resolve the core problems of CNNs in brain metastases analysis.

Recognizing the significant impact of detection tasks on segmentation performance, researchers began exploring CNN methods specifically optimized for brain metastases detection. The CropNet proposed by Dikici et al. (33) focused on the detection task of small brain metastases (≤15 mm), achieving accuracy levels comparable to large lesion detection methods for small lesions through sensitivity-constrained LoG candidate selection and targeted data augmentation strategies. The important significance of this work lies in demonstrating that precise lesion localization can effectively assist subsequent segmentation tasks. Building on this understanding, Qu et al. (34) further proposed the gated high-resolution CNN (GHR-CNN), which achieved improvements in segmentation accuracy, sensitivity, and generalization capability by maintaining high-resolution features and introducing gating mechanisms, particularly excelling in small lesion detection. This indicates that through carefully designed network structures and training strategies, a single segmentation network can also achieve good performance without strictly relying on independent detection steps.

Although CNN methods have made preliminary progress in brain metastases detection and segmentation tasks (Table 1), their inherent technical limitations restrict further performance improvements. Future CNN improvement directions should focus on collaborative optimization of detection and segmentation tasks as well as targeted network structure design. For example, architectures that fuse object detection with instance segmentation, such as Mask R-CNN, provide new technical approaches. However, the key lies in how to effectively integrate detection information into the segmentation process and how to design specialized network structures and training strategies tailored to the specific characteristics of brain metastases.


Table 1. CNN-based architecture for brain metastasis segmentation.

2.2 U-Net and its variants

The limitations exposed by CNN methods in brain metastases analysis prompted researchers to seek more advanced network architectures. The U-Net architecture, through its encoder-decoder structure and skip connection design, can effectively address the deficiencies of CNNs in capturing global contextual information (Figure 3). The U-Net architecture proposed by Ronneberger et al. (35) in 2015, with its symmetric network design, enables the model to capture both high-level semantic information and preserve low-level detailed features, thus demonstrating good performance in both detection and segmentation tasks of brain metastases.
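The sketch below condenses the encoder-decoder-plus-skip-connection idea to a single resolution level; it illustrates why the decoder can recover fine lesion boundaries (the skip reinjects low-level detail) and is not a reproduction of any published U-Net variant.

```python
# One-level U-Net sketch: encoder, bottleneck, decoder, and a skip connection.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)  # 32 = 16 upsampled + 16 from the skip
        self.out = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                    # low-level detail, kept for the skip
        b = self.bottleneck(self.down(e))  # high-level semantics
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # fuse via skip connection
        return self.out(d)

print(MiniUNet()(torch.randn(1, 1, 64, 64)).shape)  # [1, 1, 64, 64]
```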


Figure 3. U-Net architecture for image segmentation of brain metastases. (a) Typical architecture of a 2D U-Net. (b) Network architecture of nnU-Net. (c) Typical architecture of a 2.5D U-Net. (d) Typical architecture of a 3D U-Net. (e) Structure of NLMET. Referenced and reproduced with permission from Yoo et al. (100), Pflüger et al. (39), Yoo et al. (37), and Liew et al. (38).

In the early stages of U-Net application to brain metastases analysis, researchers primarily enhanced the model’s feature extraction capabilities in detection and segmentation tasks through multimodal MRI data fusion. Bousabarah et al. (20) proposed an ensemble learning method based on multimodal 3D MRI data, combining three network structures: cU-Net, moU-Net, and sU-Net, trained with multimodal data including T1c, T2, T2c, and FLAIR, achieving good results in detecting larger volume lesions (>0.06 mL). However, multimodal fusion strategies still exhibited performance limitations in small lesion detection. Addressing this issue, Cao et al. (21) proposed an asymmetric UNet architecture (asym-UNet) from an architectural design perspective, employing different-sized convolutional kernels (3 × 3 × 3 and 1 × 1 × 3) to simultaneously process image features of small tumors and boundary information of large metastases, achieving improved results in small lesion detection tasks (diameter <10 mm). This work demonstrated that targeted architectural modifications can more effectively address specific technical challenges compared to simple data fusion.
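The asymmetric-kernel idea reduces to a pair of parallel 3D convolutions; the kernel sizes below are the ones quoted above, while the channel counts and padding are illustrative assumptions rather than asym-UNet's actual configuration.

```python
# Two parallel 3D convolution branches with the kernel sizes quoted above;
# channel counts are illustrative, not asym-UNet's actual configuration.
import torch.nn as nn

iso_branch = nn.Conv3d(16, 16, kernel_size=(3, 3, 3), padding=(1, 1, 1))   # small tumors
anis_branch = nn.Conv3d(16, 16, kernel_size=(1, 1, 3), padding=(0, 0, 1))  # large-lesion boundaries
```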

With the development of 3D medical image processing technology, researchers began exploring more refined optimization strategies to enhance U-Net performance in brain metastases detection and segmentation tasks. Rudie et al. (22) systematically evaluated the segmentation performance of 3D U-Net in large-scale patient samples, providing benchmark data for the clinical application of this architecture. Building on these foundational works, Chartrand et al. (36) improved detection sensitivity for small brain metastases (2.5–6 mm) to 90.9% by introducing volume-aware loss functions, reducing false negative rates compared to traditional CNN models in this size range. The comparative study by Yoo et al. (37) quantified the performance differences between 2.5D and 3D architectures in detection tasks: 3D U-Net demonstrated higher sensitivity in small metastases detection, while 2.5D U-Net achieved higher detection precision. To achieve balance among different performance metrics in detection, researchers proposed weak learner fusion methods for 2.5D and 3D network prediction features, which could reduce false positive predictions for smaller lesions. The 3D non-local convolutional neural network (NLMET) method by Liew et al. (38) pushed the technical boundary of small lesion detection to 1 mm and maintained good generalization performance across different datasets and MRI sequences.

In recent years, the application of adaptive deep learning frameworks such as nnU-Net in brain metastases detection and segmentation tasks marks a new stage in the technological development of this field. Unlike traditional fixed architectural designs, these frameworks can automatically adjust network structures and training parameters according to dataset characteristics. Pflüger et al. (39) applied nnU-Net to brain metastases detection tasks, achieving detection of contrast-enhancing tumors and non-enhancing FLAIR signal abnormal regions without manual adjustment of volume threshold parameters. In their 2025 research work, Yoo et al. (13) achieved 0.904 sensitivity in brain metastases detection tasks while maintaining low false positive rates (0.65 ± 1.17) by introducing tumor volume-adaptive 3D patch adaptive data sampling (ADS) and adaptive Dice loss (ADL). These results indicate that adaptive frameworks capable of automatically adjusting according to data characteristics have performance advantages over manually designed fixed architectures.

Although U-Net-based brain metastases detection and segmentation technologies have achieved substantial progress (Table 2), further improvement in small lesion detection accuracy and effective integration of emerging network architectures remain the main technical challenges currently faced. In small lesion detection optimization, future research can explore targeted loss function designs, such as focal loss (40) and OHEM (41) methods that can effectively handle class imbalance problems and improve detection sensitivity for small lesions. In feature extraction and fusion strategies, multi-scale feature extraction, attention mechanisms, and Transformer-based fusion methods are expected to further improve small lesion recognition capabilities. Additionally, improvement of evaluation metrics is also of significant importance; for example, similarity distance (SimD) (42) can not only consider position and shape similarity but also automatically adapt to evaluation requirements for different-sized objects in different datasets. In network architecture innovation, the successful performance of emerging architectures like Transformers in natural language processing and computer vision fields has drawn considerable attention to their application potential in brain metastases analysis. For example, the nnU-NetFormer (43) method, which integrates transformer modules into the deep structure of the nnU-Net framework, can effectively extract local and global features of lesion regions in multimodal MR images, although current performance validation of such networks mainly focuses on brain tumor image segmentation tasks. Meanwhile, new training strategies such as self-supervised learning and semi-supervised learning may also provide new solutions for improving model performance and data utilization efficiency, aiming to enhance model generalization capability and clinical applicability while maintaining high accuracy.
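Of the loss designs mentioned above, soft Dice and focal loss are straightforward to sketch for binary lesion masks. The generic formulations below show how both downweight the overwhelming background class; they are not the exact adaptive Dice or focal variants used in the cited papers.

```python
# Generic class-imbalance-aware losses for binary lesion masks (illustrative).
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    return 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)

def focal_loss(logits, target, alpha=0.25, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)                               # prob. of the true class
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()  # easy voxels downweighted

logits = torch.randn(1, 1, 32, 32, 32)
target = (torch.rand(1, 1, 32, 32, 32) > 0.99).float()  # sparse small "lesion"
loss = soft_dice_loss(logits, target) + focal_loss(logits, target)
```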


Table 2. U-Net based architecture for brain metastasis segmentation.

2.3 DeepMedic and its variants

While U-Net technology continues to evolve, researchers are also exploring other architectural solutions specifically designed for 3D medical image segmentation (Figure 4). DeepMedic, as a CNN architecture specifically designed for 3D medical image segmentation tasks, was proposed by Kamnitsas et al. (44) in 2016. Unlike U-Net, which uses 2D CNNs and captures context and precise localization through contracting and symmetric expanding paths, DeepMedic employs a dual-path architecture that can simultaneously process input images at multiple scales, thereby better combining local and larger contextual information. This design enables DeepMedic to fully utilize volumetric data, capturing richer spatial information for more accurate segmentation of brain metastases. Additionally, DeepMedic employs a dense training scheme to effectively handle 3D medical scans and address class imbalance in the data, which contrasts with U-Net’s method of combining feature maps from contracting paths with expanding paths via skip connections to preserve high-resolution information. Another notable feature of DeepMedic is its use of a 3D fully connected conditional random field (CRF) for post-processing to remove false positives, further enhancing segmentation accuracy. Currently, DeepMedic has achieved state-of-the-art performance on multiple datasets, providing a new and effective tool for brain metastasis segmentation.
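A minimal sketch of the dual-pathway principle follows: one stream processes the patch at native resolution, a parallel stream processes a downsampled version for wider context, and the two feature maps are fused. Layer widths and depths are illustrative, not Kamnitsas et al.'s exact configuration, and the CRF post-processing step is omitted.

```python
# Dual-pathway 3D sketch: native-resolution detail + downsampled context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPath3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.normal = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())
        self.context = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                     nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv3d(32, 2, kernel_size=1)  # background vs. metastasis

    def forward(self, x):
        fine = self.normal(x)                                  # local detail
        coarse = F.interpolate(x, scale_factor=0.5, mode="trilinear",
                               align_corners=False)
        ctx = self.context(coarse)                             # wider context
        ctx = F.interpolate(ctx, size=x.shape[-3:], mode="trilinear",
                            align_corners=False)
        return self.fuse(torch.cat([fine, ctx], dim=1))

print(DualPath3D()(torch.randn(1, 1, 32, 32, 32)).shape)  # [1, 2, 32, 32, 32]
```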


Figure 4. DeepMedic architecture for image segmentation of brain metastases. (a) Typical architecture of DeepMedic. (b) Commonly used structure of 3D U-Net integrated with DeepMedic. (c) Structure of DeepMedic+. (d) Typical architecture of En-DeepMedic. Referenced and reproduced with permission from Kamnitsas et al. (44), Hu et al. (46), Huang et al. (11), and Liu et al. (45).

The emergence of DeepMedic attracted significant attention from researchers, leading to improvements and applications. Liu et al. (45) proposed En-DeepMedic, which adds extra sub-paths to capture more multi-scale features and utilizes GPU platforms to enhance computational efficiency, further improving segmentation accuracy, particularly for small lesions. Charron et al. (2) applied DeepMedic to segment brain metastases using multi-sequence MRI data (T1, T2, FLAIR), extending its application scope. Hu et al. (46) combined 3D U-Net with DeepMedic to process integrated MRI and CT images and proposed a volume-aware Dice loss to optimize segmentation by utilizing lesion size information, aiming to further improve small lesion detection. Jünger et al. (47) trained DeepMedic using data from heterogeneous scanners from different vendors and research centers, improving the model’s generalization and robustness, making it more applicable to clinical scenarios.

To further optimize DeepMedic’s performance, researchers have continually explored new methods and strategies. Huang et al. (11) introduced the volume-level sensitivity-specificity (VSS) loss function to balance sensitivity and specificity, addressing the difficulty DeepMedic had in reconciling these two aspects and further enhancing segmentation accuracy. Kikuchi et al. (48) combined DeepMedic with black and white blood images from the simultaneously acquired VISIBLE sequence, further improving detection sensitivity and reducing false positive rates, thus providing a more reliable basis for the accurate diagnosis of brain metastases.

Although DeepMedic and its improved versions have achieved good results in brain metastases segmentation (Table 3), existing technologies still have room for improvement in edge texture recognition of multiple lesions. To address this issue, multi-scale feature extraction and edge detection mechanisms can be integrated into the DeepMedic network architecture. Multi-scale feature extraction can enhance the model’s perception capability for lesions of different sizes, while edge detection can effectively capture edge texture information of lesions. The combination of these two approaches is expected to improve the accuracy of brain metastases image recognition.


Table 3. DeepMedic-based architecture for brain metastasis segmentation.

In terms of multi-scale feature extraction, inception modules or feature pyramid networks (FPN) can be introduced into the encoder part of DeepMedic. Inception modules effectively capture multi-scale information from images by using convolutional kernels of different sizes in parallel (such as 1 × 1, 3 × 3, 5 × 5, etc.), and have achieved good results in various image recognition tasks (49). FPN achieves effective fusion of features at different scales by constructing multi-level feature pyramids. For edge detection, an independent edge detection branch can be added after the output layer of DeepMedic, employing classical methods such as Sobel operators or Canny operators. The Sobel operator identifies edges by calculating the gradient of each pixel in the image in both horizontal and vertical directions, while the Canny operator is a more complex edge detection algorithm that can more accurately detect image edges and has the advantage of noise interference resistance through multi-level filtering and threshold processing (50). This improvement strategy can effectively extract edge information from segmentation results, thereby better identifying edge texture features of lesions and providing more reliable technical support for precise diagnosis and treatment of brain metastases.
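The core operation of such an edge branch can be sketched with a Sobel gradient applied to a lesion probability map, as below with SciPy on synthetic data. This illustrates the operator itself; the proposed integration into DeepMedic remains a design suggestion, not an implemented method from the cited papers.

```python
# Sobel edge extraction on a (synthetic) lesion probability map.
import numpy as np
from scipy import ndimage

prob_map = np.random.rand(128, 128)   # stand-in for one slice of model output
gx = ndimage.sobel(prob_map, axis=0)  # gradient along rows
gy = ndimage.sobel(prob_map, axis=1)  # gradient along columns
edge_strength = np.hypot(gx, gy)      # gradient magnitude = edge map
edge_mask = edge_strength > edge_strength.mean() + 2 * edge_strength.std()
```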

Reviewing the development trajectory of CNN, U-Net, and DeepMedic architectures, the technological evolution logic of deep learning in the field of brain metastases analysis becomes clearly apparent. CNNs excel in local feature extraction but lack global contextual modeling capabilities, which directly resulted in high false positive rates in small lesion detection for early methods (such as the false positive rate of 0.05 per slice reported by Losch (31)). U-Net effectively addressed this limitation through its encoder-decoder structure and skip connection mechanisms. Its symmetric network design can both capture high-level semantic information and preserve low-level detailed features, thus generally outperforming early CNN methods in segmentation accuracy. DeepMedic adopts a dual-pathway design to simultaneously process inputs at different scales (44), possessing natural advantages when handling 3D volumetric data, although its computational complexity is relatively high.

From a performance perspective, U-Net-based adaptive frameworks demonstrate optimal application effectiveness, particularly the latest nnU-Net variants achieving over 90% sensitivity in detection tasks and Dice coefficients above 0.8 in segmentation tasks (13). However, this performance advantage comes at the cost of sacrificing interpretability, while the simple structure of CNNs makes feature visualization relatively straightforward, and DeepMedic’s dual-pathway design allows for separate analysis of contributions at different scales, providing certain advantages in interpretability. Regarding generalization ability, DeepMedic and nnU-Net perform relatively well, with the former showing good consistency across multi-center data (47) and the latter improving cross-dataset generalization ability through adaptive mechanisms (39).

Therefore, technology selection in clinical applications should be based on specific requirements: nnU-Net or improved 3D U-Net is recommended for high-precision scenarios, lightweight CNNs or 2.5D U-Net for real-time applications, DeepMedic or domain-adaptive U-Net should be prioritized for multi-center deployment, while scenarios requiring interpretability should employ CNNs combined with visualization techniques. Future research directions should focus on exploring effective integration of emerging architectures such as Transformers with existing frameworks, as well as designing composite loss functions optimized for small lesions, aiming to enhance model interpretability and generalization ability while maintaining high accuracy.

3 Brain metastases image classification tasks

3.1 Image-based differentiation between brain metastases and glioblastoma

Brain metastases (BM) and glioblastoma (GBM) represent the most common malignant brain tumors in adults. For patients with pre-existing malignancies in other sites, accurate differentiation between brain metastases and primary glioblastoma when cerebral lesions appear holds significant clinical importance (51). Brain metastases demonstrate high similarity to glioblastoma multiforme on conventional MRI, with both potentially exhibiting rim enhancement with surrounding T2 hyperintensity, ring enhancement, and intratumoral necrosis (51, 52). These similar morphological presentations make accurate differentiation based solely on conventional imaging challenging (53). However, compared to glioblastoma multiforme, brain metastases typically feature more well-defined margins and a more spherical shape. Additionally, the peritumoral region of brain metastases primarily manifests as vasogenic edema, whereas glioblastoma multiforme peritumoral areas often show tumor cell infiltration with irregular shape and invasive growth characteristics (51, 52). Accurate differentiation based on these feature distinctions is crucial for treatment strategy formulation, as brain metastasis patients may receive systemic therapy targeting the primary tumor and local treatments such as SRS, while glioblastoma multiforme requires comprehensive treatment including maximal safe resection followed by molecular classification and concurrent chemoradiotherapy (51, 54). Evidently, accurate diagnosis not only avoids unnecessary invasive examinations and reduces patient risk but also improves diagnostic efficiency and provides a basis for timely treatment. In recent years, with the rapid development of imaging technologies and artificial intelligence, researchers have continuously explored new imaging methods and analytical techniques to improve the preoperative differential diagnostic accuracy between GBM and BM (Figure 5).


Figure 5. Brain metastases and glioblastoma images. (a) Typical BM from lung carcinoma. (b) Typical glioblastoma (GBM). Referenced and reproduced with permission from Parvaze et al. (58).

Radiomics, an emerging imaging analysis technique, provides powerful tools for the differential diagnosis of GBM and BM (Supplementary Table S1). By extracting a large number of quantitative features from medical images, such as first-order statistics, histogram features, and texture features (e.g., absolute gradient, gray-level co-occurrence matrix, gray-level run-length matrix, gray-level size zone matrix, and neighborhood gray-difference matrix), radiomics can effectively mine diagnostic information hidden in imaging data, thus improving the accuracy of distinguishing between GBM and BM.
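As an example of one feature family named above, the sketch below computes gray-level co-occurrence matrix (GLCM) statistics with scikit-image. The region of interest is synthetic, and the 32-level quantization is an illustrative choice (one also reported in studies discussed below).

```python
# GLCM texture features with scikit-image (synthetic ROI, illustrative settings).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(64, 64) * 31).astype(np.uint8)  # tumor ROI, 32 gray levels
glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```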

Researchers such as Qian et al. (53), Artzi et al. (54), and Priya et al. (52) have extracted radiomic features and used various machine learning classifiers, including support vector machines (SVM) and random forests, to build differential diagnosis models for GBM and BM, achieving high diagnostic accuracy. Some researchers have begun to explore radiomics models based on multiparametric MRI to obtain more comprehensive tumor information. Liu et al. (55) extracted radiomic features from T2-weighted and contrast-enhanced T1-weighted images and built a tree-based pipeline optimization tool (TPOT) model to differentiate GBM from BM; the model incorporating both MRI sequences achieved the best predictive performance. Bijari et al. (56) extracted hidden features from four 3D MRI sequences (T1, T2, T1c, FLAIR) and generated accurate features highly correlated with model accuracy, implementing a multitask learning model based on logistic regression combined with multidimensional discrete wavelet transformation to distinguish GBM from BM. Huang et al. (57) treated the 1,106 features extracted from each sequence (T1, T2, T1c) as three separate tasks, using a logistic loss function as the data-fitting term to build a feature selection classification model for GBM and BM. Parvaze et al. (58) extracted 93 radiomic features from multiparametric MRI (FLAIR, T1c, T1) and used random forests to differentiate GBM from BM. Joo et al. (59) extracted radiomic features from T1, T2, T2 FLAIR, and T1c images and developed an integrated machine learning model based on LASSO feature selection with AdaBoost and SVC classifiers for multiclass classification of glioblastoma, lymphoma, and metastases. Gao et al. (60) showed that diffusion kurtosis imaging (DKI) parameters and radiomic features from conventional MRI sequences, combined with various machine learning algorithms, could effectively differentiate GBM from solitary brain metastases (SBM); the multi-parameter DKI model demonstrated the best diagnostic performance compared with single-parameter DKI and conventional MRI models. These studies show that multitask learning strategies can effectively exploit complementary information between different MRI sequences, improving diagnostic efficiency and accuracy. Chen et al. (61) developed a diagnostic model combining texture features from the whole tumor area and the 10 mm tumor-brain interface area, using feature selection and classifier combinations (ANOVA + LR, KW + LR, RELIEF + NB, and RFE + NB) to differentiate GBM from solitary brain metastasis. In summary, radiomics provides an objective and accurate approach to the differential diagnosis of GBM and BM by extracting and analyzing multidimensional imaging features and using machine learning algorithms to construct predictive models, with promising clinical applications.

Convolutional neural networks (CNN), as a representative deep learning technique, also show significant advantages in distinguishing GBM from BM (Supplementary Table S2). With their layered structure, CNN models can automatically extract and learn multi-level features from imaging data that traditional image analysis methods find difficult to extract and quantify (51, 62), such as tumor boundary clarity, features of internal necrotic areas, and infiltration of surrounding tissue. End-to-end training then progressively optimizes feature extraction and classification, effectively capturing subtle differences between GBM and BM and improving diagnostic accuracy. This mechanism allows CNNs to learn increasingly abstract features from raw images, from low-level edges and textures in shallow convolutional layers to more complex patterns such as tumor shape, structure, and spatial distribution in deeper layers.

Bae et al. (51) and Shin et al. (63) respectively built differential diagnosis models for GBM and BM using deep neural networks (DNN) and ResNet-50, achieving diagnostic performance superior to that of junior neuroradiologists. This suggests that deep learning models can reach or even exceed human experts’ performance in some cases. To better utilize imaging information, researchers have developed classification models based on 3D CNNs. Chakrabarty et al. (62) developed a 3D CNN algorithm for classifying six common brain tumors, including GBM and BM, and achieved good classification results on T1-weighted MRI scans. The 3D CNN effectively captures the spatial information of tumors, improving diagnostic performance. In addition, multiparametric MRI is widely used in deep learning models. Yan et al. (64) used a 3D ResNet-18 algorithm and multiparametric MRI (DWI and conventional MRI) to construct a differential diagnosis model for GBM and BM, finding that the model combining DWI and conventional MRI had a higher AUC than single MRI sequence models, indicating that multimodal imaging data provide richer diagnostic information. Xiong et al. (65) used the GoogLeNet model and preoperative multiplanar T1-weighted enhanced (T1CE) MRI images to automatically differentiate high-grade gliomas (HGG) from solitary brain metastasis (SBM). The model achieved an average accuracy of 92.78% in distinguishing HGG from SBM, with over 90% accuracy even when distinguishing using only the tumor core or edema region. To further enhance clinical reliability, Park et al. (66) proposed a deep ensemble network based on DenseNet121, processing multiparametric MRI images to differentiate GBM and BM. This model not only provides accurate diagnostic results but also offers predictions of uncertainty and interpretability, enhancing clinicians’ trust in the model. In summary, deep learning methods, by automatically learning and analyzing complex imaging features, provide new and effective tools for the differential diagnosis of GBM and BM, advancing precision medicine.
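Several of these studies fine-tune ImageNet-pretrained backbones. The sketch below shows the generic transfer-learning recipe with a torchvision ResNet-50; the single-channel input stem, two-class head, and frozen-backbone schedule are illustrative assumptions, not any one study's exact setup.

```python
# Generic transfer learning with a pretrained ResNet-50 (illustrative setup).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3,
                        bias=False)                   # accept 1-channel MRI
model.fc = nn.Linear(model.fc.in_features, 2)         # e.g., GBM vs. BM

# Freeze the pretrained backbone; train only the new input stem and head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("conv1", "fc"))

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                             lr=1e-4)
```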

Although the combination of imaging technology and AI has made significant progress in the differential diagnosis of GBM and BM, further research is needed to overcome existing challenges. Future studies should develop interpretable deep learning models, such as using heatmaps and Grad-CAM methods to explain model predictions, improving their clinical application value. Additionally, the development of automated tools, such as fully automated image segmentation and feature extraction tools, can enhance research efficiency and model robustness (55, 59, 60). In conclusion, future research needs breakthroughs in expanding sample sizes, integrating multimodal imaging data, exploring more detailed tumor sub-region analysis, combining clinical information, and enhancing model interpretability, to ultimately achieve accurate differential diagnosis of GBM and BM, providing better decision support tools for clinicians.

3.2 Classification of brain metastases sources

Accurate identification of the origin of brain metastases holds significant importance in clinical practice, as brain metastases from different primary sites differ markedly in treatment responsiveness and prognosis. For example, brain metastases from small cell lung cancer (SCLC) are typically managed with chemotherapy given their chemosensitivity, whereas those from non-small cell lung cancer (NSCLC) may warrant surgery combined with targeted therapy (67), while brain metastases from breast cancer and melanoma may be more amenable to the corresponding molecular targeted therapies or immunotherapies (68). These differences in treatment options directly impact patient survival benefit. However, in the absence of definitive primary lesion information, traditional tissue biopsy, although capable of determining the primary site, carries surgical risks and increases patient suffering, and some patients cannot tolerate it because of poor physical condition or lesion location. Furthermore, when different pathological subtypes arise from the same organ, such as distinguishing SCLC from NSCLC, pathological biopsy alone often cannot provide sufficiently comprehensive information; auxiliary methods such as immunohistochemical staining, molecular pathological detection, or genetic testing are typically required to clarify the specific subtype (67). Failure to identify the origin promptly and accurately complicates treatment selection and jeopardizes the optimal therapeutic window. Therefore, developing non-invasive, imaging-based methods for identifying the origin of brain metastases, with artificial intelligence assisting image analysis as a complementary tool for clinical diagnosis, can provide rapid and reliable auxiliary diagnostic information and holds significant value for optimizing treatment decisions and improving patient prognosis.

However, traditional imaging diagnostic methods often struggle to accurately identify the source of brain metastases (Figure 6). Nonetheless, studies have shown that deep learning and machine learning methods can successfully classify the source of brain metastases (Supplementary Tables S3, S4). Image texture and radiomics analysis can extract subtle features from medical images that are difficult for the human eye to recognize, such as the uniformity, roughness, and directionality of the tumor’s internal gray-level distribution. These features are closely related to the tumor’s pathological characteristics, gene expression, and biological behavior, making them useful for distinguishing brain metastases originating from different primary tumors.


Figure 6. Images of brain metastases of different origins. (A) Brain metastasis originating from lung carcinoma; subtypes include (a) adenocarcinoma, (b) squamous cell carcinoma, and (c) small cell lung carcinoma. (B) Brain metastasis originating from breast cancer. (C) Brain metastasis originating from melanoma. (D) Brain metastases of other origins. Referenced and reproduced with permission from Tulum (68), Ortiz-Ramón et al. (16), and Shi et al. (75).

Classical machine learning methods have played an important role in the recognition of the primary source of brain metastases. Numerous studies have used machine learning methods to analyze texture features extracted from MRI or CT images in order to differentiate brain metastases originating from various primary tumors. Ortiz-Ramón et al. (16, 69, 70) conducted a series of studies exploring the impact of different texture features, classification models, and image modalities on brain metastasis classification. Early research (69) used 3D texture features and compared five classifiers: naive Bayes (NB), k-nearest neighbors (k-NN), multilayer perceptrons (MLP), random forests (RF), and linear kernel support vector machines (SVM). The study found that the NB classifier performed the best (AUC = 0.947 ± 0.067). Further research (70) focused on 2D texture features and used SVM and k-NN classifiers for evaluation. The results showed that the SVM classifier, combined with two gray-level co-occurrence matrix features, achieved a higher AUC (0.953 ± 0.061). In a subsequent study (16), they compared 2D and 3D texture features and found that 3D texture features were more advantageous in distinguishing brain metastases from different primary tumors. Using 3D texture features with 32 gray levels and a random forest classifier, they achieved an AUC of 0.873 ± 0.064. Béresová et al. (71) used texture analysis techniques [local binary pattern (LBP) and gray-level co-occurrence matrix (GLCM)] to extract image features and applied discriminant function analysis (DFA) to differentiate brain metastases from lung cancer and breast cancer. They compared texture features from contrast-enhanced T1-weighted images and LBP images and found that LBP image texture features were more effective in distinguishing lung cancer and breast cancer brain metastases, achieving an accuracy of 72.4%.
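The classifier comparisons in these studies share a common recipe: cross-validated AUC over a lesion-by-feature matrix. The sketch below reproduces that recipe on synthetic data with four of the classifier families named above; real inputs would be GLCM/LBP features per lesion.

```python
# Cross-validated AUC comparison of texture-feature classifiers (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 40))    # 120 lesions x 40 texture features
y = rng.integers(0, 2, size=120)  # binary primary-site label

classifiers = {
    "NB": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="linear", probability=True, random_state=0),
}
for name, clf in classifiers.items():
    auc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y,
                          cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```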

Kniep et al. (72) combined radiomics features with clinical data and used random forests to predict five different types of metastatic tumors, achieving AUC values ranging from 0.64 to 0.82. Zhang et al. (73) used radiomic features from brain CT images, combined with age and gender information, and applied binary logistic regression and SVM models to differentiate brain metastases from primary lung adenocarcinoma and squamous carcinoma, with AUC values of 0.828 and 0.769, respectively. Cao et al. (74) constructed and evaluated logistic regression and SVM models using selected radiomic features from individual CT, MRI, and combined images. The model showed the highest accuracy in differentiating brain metastases from lung cancer and breast cancer origins, with AUC values of 0.771 and 0.805, respectively. Tulum (68) combined traditional machine learning (SVM and MLP based on radiomics) and deep learning (EfficientNet-b0 and ResNet-50) to differentiate different subtypes of lung cancer brain metastases from MRI images. Although traditional machine learning methods performed well with small datasets, deep learning methods, through transfer learning, demonstrated higher classification performance on small datasets. Shi et al. (75) expanded the application range of radiomics by using LASSO regression to select multi-region radiomics features and then using logistic regression to differentiate brain metastases from lung adenocarcinoma and breast cancer origins. They also predicted epidermal growth factor receptor (EGFR) mutations and human epidermal growth factor receptor 2 (HER2) status, providing new insights for personalized treatment of brain metastasis patients. Mahmoodifar et al. (76) focused on the spatial distribution features of brain metastases. They used principal component analysis (PCA) to reduce the dimensionality of the spatial coordinates of brain metastases and combined age, target volume, and gender information with random forests, SVM, and TabNet deep learning models to differentiate brain metastases from five different primary cancer types. The SVM algorithm achieved an accuracy of 97%, and the TabNet model reached 96%.

These studies demonstrate that texture and radiomic features extracted from MRI or CT images, combined with appropriate machine learning models (68, 77), can effectively differentiate brain metastases from different primary tumors and predict relevant molecular marker statuses (75), providing new tools and strategies for the diagnosis, differential diagnosis, and personalized treatment of brain metastases. Compared with traditional machine learning methods, convolutional neural network-based deep learning models can automatically learn complex image features without manually designed texture features, improving classification efficiency. For example, CNN models such as EfficientNet (67, 68) and ResNet (68, 78–80) have achieved remarkable results in differentiating brain metastases of small cell lung cancer versus non-small cell lung cancer origin, with accuracies above 90%. Additionally, 3D residual networks (3D-ResNet) combined with attention mechanisms have further enhanced the models' ability to capture key information and thereby improved classification accuracy. For example, one study (78) using a 3D-ResNet model on multi-sequence MRI data increased the classification accuracy for small cell versus non-small cell lung cancer brain metastases from 85% to 92%.

3.3 Classification of radiation necrosis and tumor recurrence

Radiation necrosis (RN) is a significant late complication of SRS, with an incidence of 2.5–24%, predominantly occurring within 2 years post-treatment (81–83). When brain metastasis patients demonstrate new enhancing lesions on MRI after SRS, differentiation between radiation necrosis and recurrent brain metastases becomes essential (Figure 7). Patients with radiation necrosis should avoid further radiotherapy to prevent exacerbation of necrosis, receiving non-invasive pharmacological treatment according to symptom severity or, when necessary, craniotomy to remove necrotic tissue. Conversely, tumor recurrence requires continued aggressive anti-tumor therapy, with options including repeat stereotactic radiosurgery or surgical resection. However, existing research indicates that conventional MRI alone typically cannot reliably distinguish radiation necrosis from recurrent tumor (84), presenting a challenge for clinical decision-making. Although biopsy with histopathological evaluation remains the gold standard for differential diagnosis, stereotactic biopsy may suffer sampling bias in mixed lesions containing both radiation necrosis and recurrent tumor, making it difficult to obtain representative tissue samples (83). Furthermore, tissue biopsy not only carries inherent surgical risks, including hemorrhage, infection, and neurological impairment, but also cannot be repeated as a routine monitoring method, significantly limiting its value in dynamic assessment. Therefore, developing cost-effective, non-invasive imaging diagnostic methods with high sensitivity and specificity holds significant clinical value for accurately differentiating radiation necrosis from recurrent tumor and guiding individualized treatment decisions.

Figure 7. Tumor progression and radiation necrosis images. (A) Typical tumor progression. (B) Typical radiation necrosis. Reproduced with permission from Kim et al. (87).

After brain tumor patients receive radiotherapy, new enhancing lesions often appear on MRI that may represent either tumor recurrence or benign radiation necrosis. The two conditions often appear similar on imaging (Figure 7), making differentiation challenging, yet accurate differentiation is crucial for formulating subsequent treatment plans and improving patient prognosis. Traditional MRI sequences, such as T1-weighted imaging, T2-weighted imaging, and fluid-attenuated inversion recovery (FLAIR), form the basis of differential diagnosis: by observing the signal characteristics of lesions across sequences, such as T1/T2 signal differences and lesion morphology, an initial judgment can be made about the nature of a lesion. However, these conventional sequences often suffer from low sensitivity, making it difficult to reliably differentiate tumor recurrence from radiation necrosis on their own (84). To improve diagnostic accuracy, various artificial intelligence (AI) techniques have been introduced into clinical practice in recent years (Supplementary Table S5).

Larroza et al. (85) extracted 179 texture features, used recursive feature elimination with a support vector machine (SVM) to select the 10 most important, and built a classification model with an SVM classifier. The model achieved an area under the curve (AUC) of 0.94 ± 0.07 on the test set, demonstrating the potential of texture-based analysis for distinguishing brain metastasis from radiation necrosis. Radiomics analysis has since focused on extracting and applying texture features from multiparametric MRI (such as T1c, T2, and FLAIR) (86). For example, Tiwari et al. (86) applied an SVM classifier to radiomic features extracted from multiparametric MRI to differentiate brain radiation necrosis from recurrent brain tumors, with the FLAIR sequence achieving the highest AUC (0.79), suggesting that combining multimodal imaging information can further improve diagnostic accuracy. Furthermore, Kim et al. (87) extracted radiomic features from susceptibility-weighted imaging and dynamic susceptibility contrast-enhanced perfusion imaging and used logistic regression models to identify the best predictors for distinguishing recurrence from radiation necrosis; their two selected predictors achieved 71.9% sensitivity, 100% specificity, and 82.3% accuracy. Yoon et al. (88) used volume-weighted voxel-based multiparametric clustering of parameters such as ADC, nCBV, and IAUC, achieving AUCs of 0.942–0.946. Zhang et al. (89) extracted 285 radiomic features from T1, contrast-enhanced T1, T2, and FLAIR sequences and used the RUSBoost ensemble classifier to build a model with a prediction accuracy of 73.2%. Peng et al. (90) employed 3D texture analysis with a random forest classifier, achieving high classification accuracy (AUC >0.9); they found that 3D texture features were better suited to brain metastases from lung cancer than to those from breast cancer and melanoma, and that random forests performed better with fewer features, offering a potential non-invasive diagnostic tool for brain metastasis patients of unknown primary origin. Chen et al. (91) extracted multiparametric radiomic features and built a random forest classification model, achieving an AUC of 0.77 in the training cohort and 0.71 in the validation cohort. Salari et al. (92) extracted radiomic features from contrast-enhanced T1-weighted MR images and used random forests, achieving an AUC of 0.910 ± 0.047. Basree et al. (17) analyzed radiomic features from contrast-enhanced T1, T2, and FLAIR sequences with logistic regression models, achieving an AUC of 0.76 ± 0.13. Zhao et al. (93) extracted image features from 3D MRI scans, collected 7 clinical and 7 genomic features, and fused them via position encoding in a heavy-ball neural ordinary differential equation (HBNODE) model to predict radiation necrosis versus recurrence after SRS for BM, achieving an ROC AUC of 0.88 ± 0.04, sensitivity of 0.79 ± 0.02, specificity of 0.89 ± 0.01, and accuracy of 0.84 ± 0.01.
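The SVM-RFE pipeline of Larroza et al. (85) can be sketched as follows with scikit-learn; the synthetic data, kernel choices, and cross-validation setup are illustrative assumptions rather than the published configuration.

```python
# Sketch of an SVM-RFE pipeline: recursive feature elimination with a linear
# SVM keeps 10 of 179 texture features, then an SVM classifier is evaluated
# by cross-validated AUC. Data here are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 179))     # 179 texture features per lesion
y = rng.integers(0, 2, size=80)    # 0 = radiation necrosis, 1 = recurrence

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=10)),
    ("svm", SVC(kernel="rbf", probability=True)),
])
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(f"Mean CV AUC: {auc.mean():.2f}")
```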

Although deep learning has made significant progress in medical image analysis, it has not yet been widely applied to directly differentiating radiation necrosis from recurrent tumor after radiotherapy; current research still relies primarily on extracting radiomic features from images and constructing classifiers with classical machine learning methods. The scarcity of studies using deep learning methods, such as convolutional neural networks (CNNs), for this task is likely related to the dependence of deep learning models on large annotated datasets: because cases of radiation necrosis and recurrence are relatively few, training samples are in short supply, which is one of the major factors limiting model performance. The problem of data imbalance exacerbates this challenge. Radiation necrosis cases are often far fewer than recurrence cases, biasing models toward the majority class during training and weakening their ability to recognize the minority class. This imbalance is particularly pronounced in tasks that require high precision to distinguish two similar pathological states, significantly degrading classification performance. At the same time, acquiring high-quality annotations is challenging: annotating medical images requires in-depth expertise and extensive clinical experience, yet subjective differences between doctors, and inconsistencies in the annotations of the same doctor at different time points, introduce noise into the data and adversely affect training outcomes. Together, these factors limit the application of deep learning to distinguishing radiation necrosis from recurrent tumors.

However, deep learning algorithms can automatically learn complex features from medical images, eliminating the need for manual feature extraction. In practice, deep learning models shorten diagnostic cycles and improve efficiency through fully automated processing, and they exhibit strong adaptability and robustness, handling imaging data of different modalities and resolutions. This suggests vast potential for deep learning in distinguishing radiation necrosis from recurrent tumors. Despite challenges such as limited data availability, imbalanced data distributions, and the difficulty of acquiring high-quality annotations, targeted solutions are gradually emerging through further research and practical exploration. To expand sample size, data augmentation techniques (20, 94, 95) can generate new samples with distributions similar to the original data through transformations such as rotation, scaling, and cropping, effectively enlarging the training set. To address data imbalance, resampling techniques such as random oversampling, undersampling, and the synthetic minority over-sampling technique (SMOTE) (96) can adjust the class proportions of the dataset so that the model attends more to minority-class samples during training and recognizes them better. Finally, to secure high-quality annotations, establishing standardized annotation processes and multi-expert consensus mechanisms is key: detailed annotation guidelines, cross-validation, and annotation review by multiple experienced medical experts can effectively minimize subjective differences and inconsistencies, improving the quality and reliability of annotated data.
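The two mitigation strategies above can be sketched briefly: SMOTE rebalancing (96) of a radiomics feature matrix with imbalanced-learn, and geometric augmentation of training images with torchvision. All data and parameters below are placeholders.

```python
# Sketch of class rebalancing (SMOTE) and image augmentation.
import numpy as np
from imblearn.over_sampling import SMOTE
from torchvision import transforms

# --- class rebalancing on a radiomics feature matrix ---
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 50))
y = np.array([0] * 85 + [1] * 15)        # necrosis cases (1) are the minority
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_res))                # both classes now have 85 samples

# --- image-level augmentation for CNN training ---
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
])
```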

4 Challenges and future directions

In brain metastasis research, machine learning has made significant progress, but challenges remain in detection, segmentation, and classification tasks, including small sample sizes, insufficient model generalization, and multimodal data integration. To address these challenges, researchers have actively explored various solutions. To overcome data limitations, techniques such as data augmentation (20, 94, 95), dense overlapping stitching (95), and transfer learning (67, 68) have been widely used. To improve generalization, researchers have focused on domain generalization (38, 94), multi-center dataset training (97), and adaptive network architectures (13, 39, 98). Methods such as multi-channel input and feature fusion (99) have been used to integrate complementary information from multimodal MRI images. For specific tasks, corresponding strategies have also been developed: in brain metastasis segmentation, asymmetric structures (21), multi-scale feature fusion (99), improved loss functions (36), and overlapping patch techniques (100) have been used to improve sensitivity to small lesions; in differentiating GBM from BM, and brain metastasis from radiation necrosis, researchers have not only integrated multimodal imaging data (51, 54, 55, 60, 63, 84, 87, 101) but also explored finer-grained tumor sub-region analysis (14, 54, 58, 60, 61, 65) and the integration of clinical information (57, 59, 102) to improve diagnostic accuracy.

However, current research still has several limitations (Supplementary Table S6). For example, although CT images play a key role in the early screening of brain metastasis, most current studies focus on MRI images, neglecting the potential applications of CT images in brain metastasis segmentation and classification tasks. Additionally, most of the existing studies have small sample sizes and lack multi-center validation, which limits the model’s generalization ability and clinical application value. Furthermore, the interpretability of deep learning models still needs improvement, and enhancing the transparency and trustworthiness of models will help integrate them more effectively into clinical workflows.

4.1 The gap between CT and MRI in brain metastasis image analysis

Deep learning has made significant progress in brain metastasis MRI image analysis, but incorporating CT images into the analysis pipeline holds important clinical significance and research value. First, CT examinations are more widespread and economical, especially in developing countries and primary healthcare settings, making CT a more accessible diagnostic tool. It is also more patient-friendly for individuals who are immobile or unable to tolerate long MRI scans. Additionally, CT images serve as the standard imaging basis for radiotherapy planning. Integrating CT images into brain metastasis segmentation and classification tasks can better assist in delineating radiotherapy target areas and dose calculation, improving the precision and safety of radiotherapy.

Although CT images are less commonly used for brain metastasis segmentation and classification, some studies have explored this direction. For segmentation, Wang et al. (103) constructed an improved U-Net architecture with a position attention module (PAM) to automatically segment the gross tumor volume (GTV) from CT simulation images of brain metastasis patients; the model performed well on an external independent validation set, although its generalization ability still requires broader validation. Wang et al. (104) went further by combining a GAN, Mask R-CNN, and CRF optimization to build a deep learning model for automatic GTV segmentation from CT simulation images; the model generalized well on both internal and external validation datasets, providing an effective technical approach for brain metastasis segmentation on CT. Nevertheless, despite these advances, CT-based segmentation still lags behind MRI in performance and requires further optimization.

For brain metastasis classification, existing research has applied CT radiomics features and deep learning models. For example, Li et al. (105) used CT radiomic features from lung cancer patients to predict brain metastasis, achieving good diagnostic performance (AUC = 0.81). Zhang et al. (106) constructed a stacked ensemble model to classify the gross tumor volume (GTV), brainstem, and normal brain tissue in brain metastasis CT images, outperforming the individual base models (AUC = 0.928, 0.932, and 0.942, respectively). Gong et al. (107) proposed a deep learning model combined with CT radiomic features to predict the 3-year risk of brain metastasis in non-small cell lung cancer patients; their ensemble learning model showed good predictive efficacy on both training and validation sets (AUC 0.85–0.91). Although CT images have been applied to brain metastasis classification, the lower soft-tissue contrast and resolution of CT compared to MRI make it harder to distinguish brain metastases from normal tissue, so models trained on CT typically perform worse in feature extraction, classification accuracy, and generalization than models trained on MRI, limiting the depth and breadth of CT-based classification research. Nevertheless, given the accessibility and cost advantages noted above, CT continues to play a crucial role in brain metastasis diagnosis and related research, prompting researchers to address the limitations of CT images and improve the performance of CT-based models.
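A stacked ensemble of the kind used by Zhang et al. (106) can be sketched with scikit-learn's StackingClassifier; the base learners, meta-learner, and data below are illustrative assumptions rather than the published configuration.

```python
# Sketch of a stacked ensemble over CT radiomics features: base learners are
# combined by a logistic-regression meta-learner trained on out-of-fold
# predictions. Features and labels are placeholders.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 60))       # CT radiomics features
y = rng.integers(0, 2, size=150)     # e.g., GTV vs. normal brain tissue

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print(cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())
```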

In future research, considering the differences between CT and MRI in imaging principles and clinical application advantages, and recognizing that they cannot replace each other, it may be valuable to combine both modalities to more comprehensively assess brain metastasis characteristics. Exploring deep learning models based on fused CT and MRI images, such as developing automatic brain metastasis segmentation models or classification models, could improve segmentation and classification accuracy, leading to more precise treatment planning.

4.2 The conflict between AI model generalization and patient privacy protection

Deep learning models show great potential in the diagnosis and treatment of brain metastasis, offering innovative solutions and breakthrough possibilities in this field. However, a key factor limiting their widespread clinical application is the lack of sufficient external validation, which leaves models with insufficient generalization ability and makes it difficult for them to adapt to complex and dynamic clinical scenarios.

Insufficient model generalization is a common issue in medical imaging research. In brain metastasis segmentation, some studies lack external validation on independent test sets, rely only on single-center data, or lack multi-center data for external validation. Others use multimodal MRI data and cascaded networks but train on small single-institution datasets, making it difficult to adapt to differences in scanning technology and hardware between hospitals, limiting generalization and potentially degrading real-world performance. Similar issues arise in the differential diagnosis of GBM and BM, the identification of brain metastasis sources, and the differentiation of radiation necrosis from tumor recurrence after radiotherapy: many studies lack external dataset validation, making it difficult to ensure effectiveness in diverse environments, while others suffer from small sample sizes and cover only limited tumor types, resulting in poor generalization. Some studies face the combined challenges of small sample sizes, absence of external independent validation, and lack of pathological confirmation of diagnoses, reducing the reliability of their results and severely limiting generalization to broader clinical settings. These limitations are not unique to brain metastasis segmentation tasks. The medical image analysis field has addressed similar issues by establishing large-scale clinical validation datasets through multi-institutional collaborations and by using these standardized datasets to provide unified accuracy metrics (such as sensitivity, specificity, and the Dice coefficient), enabling direct performance comparison and objective evaluation across algorithms. The primary brain tumor segmentation validation framework represented by the brain tumor segmentation (BraTS) challenge has thoroughly validated the effectiveness of this multi-center, data-driven approach in enhancing clinical translation, providing a successful paradigm that brain metastasis image analysis can reference (18).
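For reference, the standardized metrics mentioned above can be computed directly from binary segmentation masks, as in the following sketch; the masks here are synthetic placeholders.

```python
# Sketch: Dice coefficient, sensitivity, and specificity from boolean masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def sensitivity(pred: np.ndarray, gt: np.ndarray) -> float:
    tp = np.logical_and(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn + 1e-8)

def specificity(pred: np.ndarray, gt: np.ndarray) -> float:
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    return tn / (tn + fp + 1e-8)

pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
gt = np.zeros_like(pred); gt[22:42, 22:42, 22:42] = True
print(dice(pred, gt), sensitivity(pred, gt), specificity(pred, gt))
```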

To promote the clinical application of machine learning in brain metastasis segmentation and classification, validation on larger, more diverse clinical datasets is needed to assess model reliability and effectiveness. However, constructing large-scale, diverse brain metastasis datasets presents its own challenges, especially for head and neck imaging data. Unlike imaging of other body regions, head and neck images contain substantial facial information that is highly identifiable and reconstructible, and direct public release could lead to patient privacy breaches. Therefore, when building public datasets, strict anonymization processes, such as face blurring or de-identification, are necessary to protect patient privacy; this is one reason why head and neck tumor imaging data in public databases such as TCIA are difficult to share openly.

However, while strict anonymization can address privacy concerns to some extent, variability in data from different hospitals introduces new challenges: differences in scanners, imaging parameters, and patient populations make generalization ability even more crucial for clinical applications. To improve generalization, domain adaptation and domain generalization techniques (108) can be used to overcome distribution differences between datasets, for example by learning features common across domains or by regularizing the model to enhance robustness to different data distributions. Additionally, federated learning (109) can train models on multi-center data while protecting patient privacy. For example, Jiménez-Sánchez et al. (110) proposed a federated learning method combining curriculum learning and unsupervised domain adaptation, which achieved significant results in breast cancer classification (AUC 0.79, PR-AUC 0.82, far surpassing traditional methods) and domain adaptation. Feng et al. (111) built a robust federated learning model (RFLM) using multi-center preoperative CT images of gastric cancer patients that outperformed clinical models and other federated learning algorithms in predicting post-surgery recurrence risk. Among federated learning paradigms, horizontal federated learning suits participants with similar features but different samples, such as brain metastasis patient data from different hospitals; vertical federated learning suits participants with the same samples but different features, such as data from different departments within the same hospital; and federated transfer learning applies when participants differ in both samples and features. Data heterogeneity, communication efficiency, and privacy must also be considered, and techniques such as differential privacy and homomorphic encryption can further strengthen privacy protection in federated learning.

For brain metastasis diagnosis and treatment, federated learning can be used to integrate data from multiple medical institutions, thereby training deep learning models with better generalization ability. For example, a federated learning network involving multiple hospitals can be built to collaboratively train a brain metastasis segmentation model using each hospital’s imaging data, without sharing raw patient image data, effectively protecting patient privacy.
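A minimal sketch of this idea, using federated averaging (FedAvg) in PyTorch, is shown below: each simulated site trains a copy of the global model locally, and only parameter updates are aggregated. The tiny stand-in network, synthetic local data, and site sizes are all illustrative assumptions, not a production federated system.

```python
# Sketch of FedAvg: weighted averaging of locally trained model parameters.
import copy
import torch
import torch.nn as nn

def make_model():
    # stand-in for a segmentation network such as a 3D U-Net
    return nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(8, 1, 1))

def local_update(model, steps=5):
    # placeholder local training on synthetic data; a real site would use
    # its own MRI volumes and a segmentation loss (e.g., Dice loss)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(steps):
        x = torch.randn(2, 1, 16, 16, 16)
        y = torch.rand(2, 1, 16, 16, 16)
        loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fed_avg(states, weights):
    # weighted average of client parameters; weights sum to 1
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = sum(w * s[k] for s, w in zip(states, weights))
    return avg

global_model = make_model()
site_sizes = [120, 80, 50]       # cases per hospital (illustrative)
for communication_round in range(3):
    states = [local_update(copy.deepcopy(global_model)) for _ in site_sizes]
    w = [n / sum(site_sizes) for n in site_sizes]
    global_model.load_state_dict(fed_avg(states, w))
```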

However, despite federated learning demonstrating enormous potential at the technical level, it still faces complex administrative coordination and policy regulation challenges in practical applications, which perhaps explains why most current large-scale medical imaging databases tend to adopt the traditional model of multi-source anonymized data integration. In the future, federated learning technology needs continuous optimization in algorithm robustness, privacy protection mechanisms, and heterogeneous data processing capabilities to better adapt to the practical requirements of complex medical image analysis tasks such as brain metastases.

4.3 AI model interpretability and clinical trust challenges

Although deep learning models have achieved excellent performance in brain metastases detection, segmentation, and classification tasks, their “black box” nature severely constrains clinical translation. Interpretability challenges are particularly prominent in brain metastases diagnosis: clinicians need to understand how AI distinguishes small lesions (<3 mm) from vascular artifacts, the basis on which it determines lesion boundaries, and its prioritization logic in cases with multiple lesions. This lack of decision transparency directly affects physicians' trust in AI systems and is a critical barrier to clinical adoption.

Current interpretability methods show obvious limitations in brain metastases applications. Adnan et al. (112) employed Grad-CAM to visualize model attention regions, with their NASNet-Large model achieving 92.98% accuracy while providing clear localization, but the interpretation granularity is too coarse for precise diagnostic requirements. Chen et al. (113) used SHAP to analyze features of the 10 mm brain-tumor interface region, with their logistic regression model achieving an AUC of 0.808 and quantified feature contributions, but the computational complexity of high-dimensional image processing limits real-time application. The integrated gradients method of Sayres et al. (114) improved physician sensitivity from 79.4% to 88.7%, yet also exposed a double-edged effect of interpretability: it potentially increased the misdiagnosis risk for patients without lesions. These studies indicate that existing interpretability techniques lack designs optimized specifically for brain metastases tasks.
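For orientation, Grad-CAM, the technique used in (112), can be sketched in a few lines of PyTorch: gradients of the predicted-class score with respect to the last convolutional block weight its activation maps, yielding a coarse heatmap. The pretrained ResNet-18 and random input below are placeholders for a trained brain metastasis classifier and a real MRI slice.

```python
# Sketch of Grad-CAM via forward/backward hooks on the last conv block.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
feats, grads = {}, {}

def fwd_hook(m, i, o): feats["a"] = o.detach()
def bwd_hook(m, gi, go): grads["a"] = go[0].detach()

layer = model.layer4[-1]                 # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # placeholder MRI slice
score = model(x)[0].max()                # score of the predicted class
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)             # channel weights
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))   # weighted activations
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```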

The impact of interpretability deficiency has transcended the technical level, becoming a significant barrier for AI systems to obtain regulatory approval, hospital procurement decisions, and clinical workflow integration. In clinical practice, radiologists’ acceptance of AI recommendations highly depends on their understanding of decision logic, particularly when handling complex cases or formulating treatment plans. Current brain metastases AI research generally treats interpretability as an additional feature rather than a core requirement, leading to a disconnect between technological development and clinical needs.

Future brain metastases AI systems should construct multi-level, personalized interpretability frameworks. At the technical level, comprehensive solutions integrating LIME local interpretation, Grad-CAM global visualization, and uncertainty estimation are needed, incorporating brain anatomical prior knowledge and radiomics semantic features. At the clinical level, stratified interpretation interfaces should be designed for physicians with different experience levels, providing detailed educational explanations for residents and key feature summaries for senior physicians. At the system level, standardized metrics for interpretability evaluation and multi-center validation mechanisms need to be established to ensure clinical effectiveness and safety of interpretation methods. More importantly, deep integration of interpretable AI with clinical decision support systems should be promoted, constructing a fully transparent diagnostic and treatment system from image analysis to treatment recommendations, truly achieving collaborative development between AI technology and clinical practice.

4.4 Lack of clinical practice translation and reader studies

Although deep learning models have shown significant potential in the diagnosis and treatment of brain metastases, their clinical translation still faces numerous challenges. For instance, while the U-Net architecture and its improved models have achieved small-scale clinical applications in brain metastasis image segmentation, they still face significant limitations in terms of precision for small lesion detection and generalization ability, particularly when adapting to different scanning devices and MRI sequences (20, 21). These technical limitations severely restrict large-scale clinical translation and application. Similarly, in brain metastasis image classification tasks, machine learning-based models for the differentiation of glioblastoma (GBM) and brain metastasis (BM) show high diagnostic accuracy in internal validation, but their clinical application remains significantly limited. The main reason is that these models are often based on single-center retrospective studies, lacking multi-center external validation, leading to concerns about their reliability across different medical institutions and patient populations (51, 53, 54). Furthermore, while studies combining radiomics and machine learning have made some progress in differentiating brain metastasis subtypes, the lack of standardized feature selection and model optimization processes has resulted in poor reproducibility and consistency between studies, severely affecting the clinical deployment value of these models (69, 70, 96).

To better serve clinical practice, AI research should establish a standardized validation process comprising three levels: technical validation, clinical validation, and implementation validation. The technical validation phase should adopt multi-dimensional metrics including the Dice coefficient, sensitivity, specificity, and Hausdorff distance, while introducing clinical relevance evaluation. Clinical validation requires prospective, multi-center, randomized controlled reader studies (110) that objectively evaluate the impact of AI systems on diagnostic accuracy, reading time, and clinical decision-making by randomly assigning radiologists of different experience levels to AI-assisted and control groups. A key step is establishing unified, standardized MRI scanning protocols, including technical specifications such as contrast agent injection timing for T1c sequences and slice thickness settings, as well as image quality control standards, to ensure the consistency and comparability of multi-center research data.
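Complementing the Dice, sensitivity, and specificity sketch in section 4.2, the Hausdorff distance between predicted and reference masks can be computed with SciPy as follows; the masks are synthetic placeholders.

```python
# Sketch: symmetric Hausdorff distance between two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Symmetric Hausdorff distance over the voxel coordinates of two masks."""
    p = np.argwhere(pred_mask)
    g = np.argwhere(gt_mask)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((32, 32), dtype=bool); pred[10:20, 10:20] = True
gt = np.zeros_like(pred); gt[12:22, 12:22] = True
print(hausdorff(pred, gt))               # distance in voxel units
```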

To achieve genuine clinical application of AI systems, key issues such as technical integration and regulatory compliance must be addressed. In PACS system integration, interface design based on DICOM standards should be adopted, developing structured report formats compliant with DICOM-SR standards to achieve seamless storage and retrieval of AI analysis results in PACS (115). Through asynchronous processing modes and automatic triggering mechanisms, the system should be capable of automatically processing newly uploaded brain MRI examinations without affecting normal hospital workflow. In regulatory compliance, quality management systems meeting the requirements of regulatory agencies such as FDA and NMPA must be established, including complete software lifecycle management, risk control measures, and continuous performance monitoring mechanisms. AI system performance dashboards should be established to monitor key indicators such as processing time and accuracy in real-time, automatically alerting and initiating emergency responses when performance deviates from preset thresholds. Additionally, improving clinical physicians’ acceptance is equally critical. Training programs should be designed for medical personnel at different levels, helping clinicians understand the advantages and limitations of AI systems through case analysis and practical exercises, and establishing user feedback collection mechanisms to continuously optimize system functionality.

In reader studies, existing research shows notable inadequacies. Although studies indicate that deep learning-assisted systems (BMSS) can significantly improve the accuracy and efficiency of brain metastases delineation, with particularly pronounced effects for less experienced residents (116), these studies are mostly limited to single-center, small-sample data, limiting their generalizability. Compared with fields such as breast cancer and lung cancer detection (117, 118), reader studies of AI for brain metastases segmentation and classification are relatively lacking: current research focuses on algorithm optimization and performance improvement, with little evaluation of radiologists' performance when using AI tools in actual clinical practice. Future research should pay more attention to radiologists' performance with AI-assisted systems, particularly differences among physicians of different experience levels. Multi-center, multi-level reader studies are recommended to evaluate AI tools in different clinical scenarios and to explore their potential value in training young physicians, better guiding the practical application of AI in clinical settings.

5 Conclusion

Artificial intelligence technologies, including classical machine learning and deep learning, have shown enormous potential in the diagnosis and treatment of brain metastases. From precise tumor segmentation to complex classification tasks, AI technologies provide new tools to improve diagnostic accuracy and efficiency. Deep learning models such as U-Net and DeepMedic have achieved significant results in brain metastasis detection and segmentation tasks, while machine learning and deep learning methods have also been successfully applied to differentiate brain metastases from glioblastoma, identify primary sources of brain metastases, and distinguish radiation necrosis from tumor recurrence post-radiotherapy. Although AI has made promising progress in brain metastasis image analysis, further research is still needed to overcome existing challenges, such as improving model interpretability and generalization ability, building large-scale high-quality clinical datasets, developing user-friendly software tools, and conducting rigorous clinical trials. With continued technological advancements and deeper clinical application, AI technologies are expected to make greater contributions to the precision diagnosis and prognosis improvement of brain metastases.

Author contributions

YH: Writing – original draft. CG: Writing – original draft. YW: Writing – original draft. ZW: Writing – original draft. CY: Writing – original draft. HD: Writing – original draft. SC: Writing – original draft. YLi: Writing – original draft. HP: Writing – review & editing. PZ: Writing – review & editing. BL: Writing – review & editing. YLu: Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was funded by the National College Student Innovation and Entrepreneurship Training Program (Nos. S202410632165X, 2024464, 2024391, and 2024380), Xuyong County People’s Hospital-Southwest Medical University Science and Technology Strategic Cooperation Program (No. 2024XYXNYD09).

Acknowledgments

Figure 1 created with biorender.com.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that Gen AI was used in the creation of this manuscript. ChatGPT was used for English language polish in this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fneur.2025.1581422/full#supplementary-material

References

1. Boire, A, Brastianos, PK, Garzia, L, and Valiente, M. Brain metastasis. Nat Rev Cancer. (2020) 20:4–11. doi: 10.1038/s41568-019-0220-y

2. Charron, O, Lallement, A, Jarnet, D, Noblet, V, Clavier, JB, and Meyer, P. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput Biol Med. (2018) 95:43–54. doi: 10.1016/j.compbiomed.2018.02.004

3. Miccio, JA, Tian, Z, Mahase, SS, Lin, C, Choi, S, Zacharia, BE, et al. Estimating the risk of brain metastasis for patients newly diagnosed with cancer. Commun Med. (2024) 4:27. doi: 10.1038/s43856-024-00445-7

4. Achrol, AS, Rennert, RC, Anders, C, Soffietti, R, Ahluwalia, MS, Nayak, L, et al. Brain metastases. Nat Rev Dis Primers. (2019) 5:5. doi: 10.1038/s41572-018-0055-y

5. Ostrom, QT, Wright, CH, and Barnholtz-Sloan, JS. Brain metastases: epidemiology. Handb Clin Neurol. (2018) 149:27–42. doi: 10.1016/B978-0-12-811161-1.00002-5

6. Tanzhu, G, Chen, L, Ning, J, Xue, W, Wang, C, Xiao, G, et al. Metastatic brain tumors: from development to cutting-edge treatment. Med Comm. (2025) 6:e70020. doi: 10.1002/mco2.70020

7. Brindle, KM, Izquierdo-García, JL, Lewis, DY, Mair, RJ, and Wright, AJ. Brain tumor imaging. J Clin Oncol. (2017) 35:2432–8. doi: 10.1200/JCO.2017.72.7636

8. Kaufmann, TJ, Smits, M, Boxerman, J, Huang, R, Barboriak, DP, Weller, M, et al. Consensus recommendations for a standardized brain tumor imaging protocol for clinical trials in brain metastases. Neuro-Oncol. (2020) 22:757–72. doi: 10.1093/neuonc/noaa030

9. Pope, WB. Brain metastases: neuroimaging. Handb Clin Neurol. (2018) 149:89–112. doi: 10.1016/B978-0-12-811161-1.00007-4

10. Fink, JR, Muzi, M, Peck, M, and Krohn, KA. Continuing education: multi-modality brain tumor imaging—MRI, PET, and PET/MRI. J Nucl Med. (2015) 56:1554–61. doi: 10.2967/jnumed.113.131516

11. Huang, Y, Bert, C, Sommer, P, Frey, B, Gaipl, U, Distel, LV, et al. Deep learning for brain metastasis detection and segmentation in longitudinal MRI data. Med Phys. (2022) 49:5773–86. doi: 10.1002/mp.15863

12. Grøvik, E, Yi, D, Iv, M, Tong, E, Rubin, D, and Zaharchuk, G. Deep learning enables automatic detection and segmentation of brain metastases on multisequence MRI. J Magn Reson Imaging. (2020) 51:175–82. doi: 10.1002/jmri.26766

13. Yoo, Y, Gibson, E, Zhao, G, Re, TJ, Parmar, H, Das, J, et al. Extended nnU-Net for brain metastasis detection and segmentation in contrast-enhanced magnetic resonance imaging with a large multi-institutional data set. Int J Radiat Oncol Biol Phys. (2025) 121:241–9. doi: 10.1016/j.ijrobp.2024.07.2318

14. Bathla, G, Dhruba, DD, Liu, Y, Le, NH, Soni, N, Zhang, H, et al. Differentiation between glioblastoma and metastatic disease on conventional MRI imaging using 3D-convolutional neural networks: model development and validation. Acad Radiol. (2024) 31:2041–9. doi: 10.1016/j.acra.2023.10.044

15. Taillibert, S, and Le Rhun, É. Epidemiology of brain metastases. Cancer Radiother. (2015) 19:3–9. doi: 10.1016/j.canrad.2014.11.001

16. Ortiz-Ramón, R, Larroza, A, Ruiz-España, S, Arana, E, and Moratal, D. Classifying brain metastases by their primary site of origin using a radiomics approach based on texture analysis: a feasibility study. Eur Radiol. (2018) 28:4514–23. doi: 10.1007/s00330-018-5463-6

17. Basree, MM, Li, C, Um, H, Bui, AH, Liu, M, Ahmed, A, et al. Leveraging radiomics and machine learning to differentiate radiation necrosis from recurrence in patients with brain metastases. J Neuro-Oncol. (2024) 168:307–16. doi: 10.1007/s11060-024-04669-4

18. Moawad, AW, Janas, A, Baid, U, Ramakrishnan, D, Saluja, R, Ashraf, N, et al. (2024). The brain tumor segmentation-metastases (BraTS-METS) challenge 2023: brain metastasis segmentation on pre-treatment MRI. arXiv. Available online at: https://doi.org/10.48550/arXiv.2306.00838. [Epub ahead of preprint]

19. Eisenhauer, EA, Therasse, P, Bogaerts, J, Schwartz, LH, Sargent, D, Ford, R, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. (2009) 45:228–47. doi: 10.1016/j.ejca.2008.10.026

20. Bousabarah, K, Ruge, M, Brand, JS, Hoevels, M, Rueß, D, Borggrefe, J, et al. Deep convolutional neural networks for automated segmentation of brain metastases trained on clinical data. Radiat Oncol. (2020) 15:87. doi: 10.1186/s13014-020-01514-6

21. Cao, Y, Vassantachart, A, Jason, CY, Yu, C, Ruan, D, Sheng, K, et al. Automatic detection and segmentation of multiple brain metastases on magnetic resonance image using asymmetric UNet architecture. Phys Med Biol. (2021) 66:015003. doi: 10.1088/1361-6560/abca53

22. Rudie, JD, Weiss, DA, Colby, JB, Rauschecker, AM, Laguna, B, Braunstein, S, et al. Three-dimensional U-Net convolutional neural network for detection and segmentation of intracranial metastases. Radiol Artif Intell. (2021) 3:e200204. doi: 10.1148/ryai.2021200204

23. Guo, Y, Wang, Y, Meng, K, and Zhu, Z. Otsu multi-threshold image segmentation based on adaptive double-mutation differential evolution. Biomimetics. (2023) 8:418. doi: 10.3390/biomimetics8050418

24. Khorshidi, A. Tumor segmentation via enhanced area growth algorithm for lung CT images. BMC Med Imaging. (2023) 23:189. doi: 10.1186/s12880-023-01126-y

25. Speybroeck, N. Classification and regression trees. Int J Public Health. (2012) 57:243–6. doi: 10.1007/s00038-011-0315-z

26. Becker, T, Rousseau, AJ, Geubbelmans, M, Burzykowski, T, and Valkenborg, D. Decision trees and random forests. Am J Orthod Dentofacial Orthop. (2023) 164:894–7. doi: 10.1016/j.ajodo.2023.09.011

27. Jiang, T, Gradus, JL, and Rosellini, AJ. Supervised machine learning: a brief primer. Behav Ther. (2020) 51:675–87. doi: 10.1016/j.beth.2020.05.002

28. Theodosiou, AA, and Read, RC. Artificial intelligence, machine learning and deep learning: potential resources for the infection clinician. J Infect. (2023) 87:287–94. doi: 10.1016/j.jinf.2023.07.006

29. Li, C, Li, W, Liu, C, Zheng, H, Cai, J, and Wang, S. Artificial intelligence in multiparametric magnetic resonance imaging: a review. Med Phys. (2022) 49:e1024–54. doi: 10.1002/mp.15936

30. Choi, RY, Coyner, AS, Kalpathy-Cramer, J, Chiang, MF, and Campbell, JP. Introduction to machine learning, neural networks, and deep learning. Transl Vis Sci Technol. (2020) 9:14. doi: 10.1167/tvst.9.2.14

31. Losch, M (2015) Detection and segmentation of brain metastases with deep convolutional networks. Available at: https://www.diva-portal.org/smash/get/diva2:853460/FULLTEXT01.pdf (Accessed February 7, 2025).

32. Xue, J, Wang, B, Ming, Y, Liu, X, Jiang, Z, Wang, C, et al. Deep learning-based detection and segmentation-assisted management of brain metastases. Neuro-Oncol. (2020) 22:505–14. doi: 10.1093/neuonc/noz234

33. Dikici, E, Ryu, JL, Demirer, M, Bigelow, M, White, RD, Slone, W, et al. Automated brain metastases detection framework for T1-weighted contrast-enhanced 3D MRI. IEEE J Biomed Health Inform. (2020) 24:2883–93. doi: 10.1109/JBHI.2020.2982103

34. Qu, J, Zhang, W, Shu, X, Wang, Y, Wang, L, Xu, M, et al. Construction and evaluation of a gated high-resolution neural network for automatic brain metastasis detection and segmentation. Eur Radiol. (2023) 33:6648–58. doi: 10.1007/s00330-023-09648-3

35. Ronneberger, O, Fischer, P, and Brox, T (2015). U-Net: convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015. 234–241.

36. Chartrand, G, Emiliani, RD, Pawlowski, SA, Markel, DA, Bahig, H, Cengarle-Samak, A, et al. Automated detection of brain metastases on T1-weighted MRI using a convolutional neural network: impact of volume aware loss and sampling strategy. J Magn Reson Imaging. (2022) 56:1885–98. doi: 10.1002/jmri.28274

37. Yoo, Y, Ceccaldi, P, Liu, S, Re, TJ, Cao, Y, Balter, JM, et al. Evaluating deep learning methods in detecting and segmenting different sizes of brain metastases on 3D post-contrast T1-weighted images. J Med Imaging. (2021) 8:037001. doi: 10.1117/1.JMI.8.3.037001

38. Liew, A, Lee, CC, Subramaniam, V, Lan, BL, and Tan, M. Gradual self-training via confidence and volume based domain adaptation for multi dataset deep learning-based brain metastases detection using nonlocal networks on MRI images. J Magn Reson Imaging. (2023) 57:1728–40. doi: 10.1002/jmri.28456

39. Pflüger, I, Wald, T, Isensee, F, Schell, M, Meredig, H, Schlamp, K, et al. Automated detection and quantification of brain metastases on clinical MRI data using artificial neural networks. Neuro-Oncol Adv. (2022) 4:vdac138. doi: 10.1093/noajnl/vdac138

40. Phan, TH, and Yamamoto, K. (2020). Resolving class imbalance in object detection with weighted cross entropy losses. arXiv. Available online at: https://doi.org/10.48550/arXiv.2006.01413. [Epub ahead of preprint]

41. Shrivastava, A, Gupta, A, and Girshick, R. (2016). Training region-based object detectors with online hard example mining. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 761–769.

42. Shi, S, Fang, Q, Xu, X, and Zhao, T. (2024). Similarity distance-based label assignment for tiny object detection. 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 13711–13718

43. Guo, S, Chen, Q, Wang, L, Wang, L, and Zhu, Y. nnUnetFormer: an automatic method based on nnUnet and transformer for brain tumor segmentation with multimodal MR images. Phys Med Biol. (2023) 68:245012. doi: 10.1088/1361-6560/ad0c8d

44. Kamnitsas, K, Ledig, C, Newcombe, VF, Simpson, JP, Kane, AD, Menon, DK, et al. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal. (2017) 36:61–78. doi: 10.1016/j.media.2016.10.004

45. Liu, Y, Stojadinovic, S, Hrycushko, B, Wardak, Z, Lau, S, Lu, W, et al. A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery. PLoS One. (2017) 12:e0185844. doi: 10.1371/journal.pone.0185844

46. Hu, SY, Weng, WH, Lu, SL, Cheng, YH, Xiao, F, Hsu, FM, et al. (2019). Multimodal volume-aware detection and segmentation for brain metastases radiosurgery. Artificial Intelligence in Radiation Therapy. AIRT 2019. 61–69

47. Jünger, ST, Hoyer, UC, Schaufler, D, Laukamp, KR, Goertz, L, Thiele, F, et al. Fully automated MR detection and segmentation of brain metastases in non-small cell lung cancer using deep learning. J Magn Reson Imaging. (2021) 54:1608–22. doi: 10.1002/jmri.27741

48. Kikuchi, Y, Togao, O, Kikuchi, K, Momosaka, D, Obara, M, Van Cauteren, M, et al. A deep convolutional neural network-based automatic detection of brain metastases with and without blood vessel suppression. Eur Radiol. (2022) 32:2998–3005. doi: 10.1007/s00330-021-08427-2

49. Szegedy, C, Liu, W, Jia, Y, Sermanet, P, Reed, S, Anguelov, D, et al. (2015). Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1–9.

50. Anitha, V. Multivariate brain tumor detection in 3D-MRI images using optimised segmentation and unified classification model. J Eval Clin Pract. (2025) 31:e14229. doi: 10.1111/jep.14229

51. Bae, S, An, C, Ahn, SS, Kim, H, Han, K, Kim, SW, et al. Robust performance of deep learning for distinguishing glioblastoma from single brain metastasis using radiomic features: model development and validation. Sci Rep. (2020) 10:12110. doi: 10.1038/s41598-020-68980-6

52. Priya, S, Liu, Y, Ward, C, Le, NH, Soni, N, Pillenahalli Maheshwarappa, R, et al. Machine learning based differentiation of glioblastoma from brain metastasis using MRI derived radiomics. Sci Rep. (2021) 11:10478. doi: 10.1038/s41598-021-90032-w

53. Qian, Z, Li, Y, Wang, Y, Li, L, Li, R, Wang, K, et al. Differentiation of glioblastoma from solitary brain metastases using radiomic machine-learning classifiers. Cancer Lett. (2019) 451:128–35. doi: 10.1016/j.canlet.2019.02.054

54. Artzi, M, Bressler, I, and Ben Bashat, D. Differentiation between glioblastoma, brain metastasis and subtypes using radiomics analysis. J Magn Reson Imaging. (2019) 50:519–28. doi: 10.1002/jmri.26643

55. Liu, Y, Li, T, Fan, Z, Li, Y, Sun, Z, Li, S, et al. Image-based differentiation of intracranial metastasis from glioblastoma using automated machine learning. Front Neurosci. (2022) 16:855990. doi: 10.3389/fnins.2022.855990

56. Bijari, S, Jahanbakhshi, A, Hajishafiezahramini, P, and Abdolmaleki, P. Differentiating glioblastoma multiforme from brain metastases using multidimensional radiomics features derived from MRI and multiple machine learning models. Biomed Res Int. (2022) 2022:2016006. doi: 10.1155/2022/2016006

57. Huang, Y, Huang, S, and Liu, Z. Multi-task learning-based feature selection and classification models for glioblastoma and solitary brain metastases. Front Oncol. (2022) 12:1000471. doi: 10.3389/fonc.2022.1000471

58. Parvaze, PS, Bhattacharjee, R, Verma, YK, Singh, RK, Yadav, V, Singh, A, et al. Quantification of radiomics features of peritumoral vasogenic edema extracted from fluid-attenuated inversion recovery images in glioblastoma and isolated brain metastasis, using T1-dynamic contrast-enhanced perfusion analysis. NMR Biomed. (2023) 36:e4884. doi: 10.1002/nbm.4884

59. Joo, B, Ahn, SS, An, C, Han, K, Choi, D, Kim, H, et al. Fully automated radiomics-based machine learning models for multiclass classification of single brain tumors: glioblastoma, lymphoma, and metastasis. J Neuroradiol. (2023) 50:388–95. doi: 10.1016/j.neurad.2022.11.001

60. Gao, E, Wang, P, Bai, J, Ma, X, Gao, Y, Qi, J, et al. Radiomics analysis of diffusion kurtosis imaging: distinguishing between glioblastoma and single brain metastasis. Acad Radiol. (2024) 31:1036–43. doi: 10.1016/j.acra.2023.07.023

61. Chen, Y, Lin, H, Sun, J, Pu, R, Zhou, Y, and Sun, B. Texture feature differentiation of glioblastoma and solitary brain metastases based on tumor and tumor-brain interface. Acad Radiol. (2025) 32:400–10. doi: 10.1016/j.acra.2024.08.025

62. Chakrabarty, S, Sotiras, A, Milchenko, M, LaMontagne, P, Hileman, M, and Marcus, D. MRI-based identification and classification of major intracranial tumor types by using a 3D convolutional neural network: a retrospective multi-institutional analysis. Radiol Artif Intell. (2021) 3:e200301. doi: 10.1148/ryai.2021200301

63. Shin, I, Kim, H, Ahn, SS, Sohn, B, Bae, S, Park, JE, et al. Development and validation of a deep learning–based model to distinguish glioblastoma from solitary brain metastasis using conventional MR images. Am J Neuroradiol. (2021) 42:838–44. doi: 10.3174/ajnr.A7003

64. Yan, Q, Li, F, Cui, Y, Wang, Y, Wang, X, Jia, W, et al. Discrimination between glioblastoma and solitary brain metastasis using conventional MRI and diffusion-weighted imaging based on a deep learning algorithm. J Digit Imaging. (2023) 36:1480–8. doi: 10.1007/s10278-023-00838-5

65. Xiong, Z, Qiu, J, Liang, Q, Jiang, J, Zhao, K, Chang, H, et al. Deep learning models for rapid discrimination of high-grade gliomas from solitary brain metastases using multi-plane T1-weighted contrast-enhanced (T1CE) images. Quant Imaging Med Surg. (2024) 14:5762–73. doi: 10.21037/qims-24-380

66. Park, YW, Eom, S, Kim, S, Lim, S, Park, JE, Kim, HS, et al. Differentiation of glioblastoma from solitary brain metastasis using deep ensembles: empirical estimation of uncertainty for clinical reliability. Comput Methods Prog Biomed. (2024) 254:108288. doi: 10.1016/j.cmpb.2024.108288

67. Grossman, R, Haim, O, Abramov, S, Shofty, B, and Artzi, M. Differentiating small-cell lung cancer from non-small-cell lung cancer brain metastases based on MRI using efficientnet and transfer learning approach. Technol Cancer Res Treat. (2021) 20:15330338211004919. doi: 10.1177/15330338211004919

68. Tulum, G. Novel radiomic features versus deep learning: differentiating brain metastases from pathological lung cancer types in small datasets. Br J Radiol. (2023) 96:20220841. doi: 10.1259/bjr.20220841

69. Ortiz-Ramón, R, Larroza, A, Arana, E, and Moratal, D. (2017). A radiomics evaluation of 2D and 3D MRI texture features to classify brain metastases from lung cancer and melanoma. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 493–496

70. Ortiz-Ramón, R, Larroza, A, Arana, E, and Moratal, D. (2017). Identifying the primary site of origin of MRI brain metastases from lung and breast cancer following a 2D radiomics approach. 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017). 1213–1216

71. Béresová, M, Larroza, A, Arana, E, Varga, J, Balkay, L, and Moratal, D. 2D and 3D texture analysis to differentiate brain metastases on MR images: proceed with caution. MAGMA. (2018) 31:285–94. doi: 10.1007/s10334-017-0653-9

72. Kniep, HC, Madesta, F, Schneider, T, Hanning, U, Schönfeld, MH, Schön, G, et al. Radiomics of brain MRI: utility in prediction of metastatic tumor type. Radiology. (2019) 290:479–87. doi: 10.1148/radiol.2018180946

73. Zhang, J, Jin, J, Ai, Y, Zhu, K, Xiao, C, Xie, C, et al. Differentiating the pathological subtypes of primary lung cancer for patients with brain metastases based on radiomics features from brain CT images. Eur Radiol. (2021) 31:1022–8. doi: 10.1007/s00330-020-07183-z

74. Cao, G, Zhang, J, Lei, X, Yu, B, Ai, Y, Zhang, Z, et al. Differentiating primary tumors for brain metastasis with integrated radiomics from multiple imaging modalities. Dis Markers. (2022) 2022:5147085. doi: 10.1155/2022/5147085

75. Shi, J, Chen, H, Wang, X, Cao, R, Chen, Y, Cheng, Y, et al. Using radiomics to differentiate brain metastases from lung cancer versus breast cancer, including predicting epidermal growth factor receptor and human epidermal growth factor receptor 2 status. J Comput Assist Tomogr. (2023) 47:924–33. doi: 10.1097/RCT.0000000000001499

76. Mahmoodifar, S, Pangal, DJ, Neman, J, Zada, G, Mason, J, Salhia, B, et al. Comparative analysis of the spatial distribution of brain metastases across several primary cancers using machine learning and deep learning models. J Neuro-Oncol. (2024) 167:501–8. doi: 10.1007/s11060-024-04630-5

77. Wang, S, Chen, X, McAuley, J, Cripps, S, and Yao, L. Plug-and-play model-agnostic counterfactual policy synthesis for deep reinforcement learning-based recommendation. IEEE Trans Neural Netw Learn Syst. (2023) 36:1044–55. doi: 10.1109/TNNLS.2023.3329808

78. Jiao, T, Li, F, Cui, Y, Wang, X, Li, B, Shi, F, et al. Deep learning with an attention mechanism for differentiating the origin of brain metastasis using MR images. J Magn Reson Imaging. (2023) 58:1624–35. doi: 10.1002/jmri.28695

PubMed Abstract | Crossref Full Text | Google Scholar

79. Li, Y, Yu, R, Chang, H, Yan, W, Wang, D, Li, F, et al. Identifying pathological subtypes of brain metastasis from lung cancer using MRI-based deep learning approach: a multicenter study. J Imaging Inform Med. (2024) 37:976–87. doi: 10.1007/s10278-024-00988-0

Crossref Full Text | Google Scholar

80. Zhu, J, Zou, L, Xie, X, Xu, R, Tian, Y, and Zhang, B. 2.5D deep learning based on multi-parameter MRI to differentiate primary lung cancer pathological subtypes in patients with brain metastases. Eur J Radiol. (2024) 180:111712. doi: 10.1016/j.ejrad.2024.111712

PubMed Abstract | Crossref Full Text | Google Scholar

81. Zoto Mustafayev, T, Turna, M, Bolukbasi, Y, Tezcanli, E, Guney, Y, Dincbas, FO, et al. Clinical and radiological effects of bevacizumab for the treatment of radionecrosis after stereotactic brain radiotherapy. BMC Cancer. (2024) 24:918. doi: 10.1186/s12885-024-12643-6

PubMed Abstract | Crossref Full Text | Google Scholar

82. McCall, NS, Lu, A, Hopkins, BD, Qian, D, Hoang, KB, Olson, JJ, et al. Risk of late radiation necrosis more than 5 years after stereotactic radiosurgery. J Neurosurg. (2024) 142:1117–24. doi: 10.3171/2024.6.JNS232187

Crossref Full Text | Google Scholar

83. Lee, D, Riestenberg, RA, Haskell-Mendoza, A, and Bloch, O. Brain metastasis recurrence versus radiation necrosis: evaluation and treatment. Neurosurg Clin N Am. (2020) 31:575–87. doi: 10.1016/j.nec.2020.06.007

PubMed Abstract | Crossref Full Text | Google Scholar

84. Menoux, I, Noël, G, Namer, I, and Antoni, D. PET scan and NMR spectroscopy for the differential diagnosis between brain radiation necrosis and tumour recurrence after stereotactic irradiation of brain metastases: place in the decision tree. Cancer Radiother. (2017) 21:389–97. doi: 10.1016/j.canrad.2017.03.003

PubMed Abstract | Crossref Full Text | Google Scholar

85. Larroza, A, Moratal, D, Paredes-Sánchez, A, Soria-Olivas, E, Chust, ML, Arribas, LA, et al. Support vector machine classification of brain metastasis and radiation necrosis based on texture analysis in MRI. J Magn Reson Imaging. (2015) 42:1362–8. doi: 10.1002/jmri.24913

PubMed Abstract | Crossref Full Text | Google Scholar

86. Tiwari, P, Prasanna, P, Wolansky, L, Pinho, M, Cohen, M, Nayate, AP, et al. Computer-extracted texture features to distinguish cerebral radionecrosis from recurrent brain tumors on multiparametric MRI: a feasibility study. AJNR Am J Neuroradiol. (2016) 37:2231–6. doi: 10.3174/ajnr.A4931

PubMed Abstract | Crossref Full Text | Google Scholar

87. Kim, TH, Yun, TJ, Park, CK, Kim, TM, Kim, JH, Sohn, CH, et al. Combined use of susceptibility weighted magnetic resonance imaging sequences and dynamic susceptibility contrast perfusion weighted imaging to improve the accuracy of the differential diagnosis of recurrence and radionecrosis in high-grade glioma patients. Oncotarget. (2016) 8:20340–53. doi: 10.18632/oncotarget.13050

PubMed Abstract | Crossref Full Text | Google Scholar

88. Yoon, RG, Kim, HS, Koh, MJ, Shim, WH, Jung, SC, Kim, SJ, et al. Differentiation of recurrent glioblastoma from delayed radiation necrosis by using voxel-based multiparametric analysis of MR imaging data. Radiology. (2017) 285:206–13. doi: 10.1148/radiol.2017161588

PubMed Abstract | Crossref Full Text | Google Scholar

89. Zhang, Z, Yang, J, Ho, A, Jiang, W, Logan, J, Wang, X, et al. A predictive model for distinguishing radiation necrosis from tumour progression after gamma knife radiosurgery based on radiomic features from MR images. Eur Radiol. (2018) 28:2255–63. doi: 10.1007/s00330-017-5154-8

PubMed Abstract | Crossref Full Text | Google Scholar

90. Peng, L, Parekh, V, Huang, P, Lin, DD, Sheikh, K, Baker, B, et al. Distinguishing true progression from radionecrosis after stereotactic radiation therapy for brain metastases with machine learning and radiomics. Int J Radiat Oncol Biol Phys. (2018) 102:1236–43. doi: 10.1016/j.ijrobp.2018.05.041

PubMed Abstract | Crossref Full Text | Google Scholar

91. Chen, X, Parekh, VS, Peng, L, Chan, MD, Redmond, KJ, Soike, M, et al. Multiparametric radiomic tissue signature and machine learning for distinguishing radiation necrosis from tumor progression after stereotactic radiosurgery. Neuro-Oncol Adv. (2021) 3:vdab150. doi: 10.1093/noajnl/vdab150

PubMed Abstract | Crossref Full Text | Google Scholar

92. Salari, E, Elsamaloty, H, Ray, A, Hadziahmetovic, M, and Parsai, EI. Differentiating radiation necrosis and metastatic progression in brain tumors using radiomics and machine learning. Am J Clin Oncol. (2023) 46:486–95. doi: 10.1097/COC.0000000000001036

PubMed Abstract | Crossref Full Text | Google Scholar

93. Zhao, J, Vaios, E, Yang, Z, Lu, K, Floyd, S, Yang, D, et al. Radiogenomic explainable AI with neural ordinary differential equation for identifying post-SRS brain metastasis radionecrosis. Med Phys. (2025) 52:2661–74. doi: 10.1002/mp.17635

PubMed Abstract | Crossref Full Text | Google Scholar

94. Nomura, Y, Hanaoka, S, Takenaga, T, Nakao, T, Shibata, H, Miki, S, et al. Preliminary study of generalized semiautomatic segmentation for 3D voxel labeling of lesions based on deep learning. Int J Comput Assist Radiol Surg. (2021) 16:1901–13. doi: 10.1007/s11548-021-02504-z

PubMed Abstract | Crossref Full Text | Google Scholar

95. Liang, Y, Lee, K, Bovi, JA, Palmer, JD, Brown, PD, Gondi, V, et al. Deep learning-based automatic detection of brain metastases in heterogenous multi-institutional magnetic resonance imaging sets: an exploratory analysis of NRG-CC001. Int J Radiat Oncol Biol Phys. (2022) 114:529–36. doi: 10.1016/j.ijrobp.2022.06.081

PubMed Abstract | Crossref Full Text | Google Scholar

96. Bunkhumpornpat, C, Boonchieng, E, Chouvatut, V, and Lipsky, D. FLEX-SMOTE: synthetic over-sampling technique that flexibly adjusts to different minority class distributions. Patterns. (2024) 5:101073. doi: 10.1016/j.patter.2024.101073

PubMed Abstract | Crossref Full Text | Google Scholar

97. Ottesen, JA, Yi, D, Tong, E, Iv, M, Latysheva, A, Saxhaug, C, et al. 2.5D and 3D segmentation of brain metastases with deep learning on multinational MRI data. Front Neuroinform. (2023) 16:1056068. doi: 10.3389/fninf.2022.1056068

PubMed Abstract | Crossref Full Text | Google Scholar

98. Ziyaee, H, Cardenas, CE, Yeboa, DN, Li, J, Ferguson, SD, Johnson, J, et al. Automated brain metastases segmentation with a deep dive into false-positive detection. Adv Radiat Oncol. (2023) 8:101085. doi: 10.1016/j.adro.2022.101085

PubMed Abstract | Crossref Full Text | Google Scholar

99. Yin, S, Luo, X, Yang, Y, Shao, Y, Ma, L, Lin, C, et al. Development and validation of a deep-learning model for detecting brain metastases on 3D post-contrast MRI: a multi-center multi-reader evaluation study. Neuro-Oncol. (2022) 24:1559–70. doi: 10.1093/neuonc/noac025

PubMed Abstract | Crossref Full Text | Google Scholar

100. Yoo, SK, Kim, TH, Chun, J, Choi, BS, Kim, H, Yang, S, et al. Deep-learning-based automatic detection and segmentation of brain metastases with small volume for stereotactic ablative radiotherapy. Cancer. (2022) 14:2555. doi: 10.3390/cancers14102555

PubMed Abstract | Crossref Full Text | Google Scholar

101. Menoux, I, Armspach, JP, Noël, G, and Antoni, D. Imaging methods used in the differential diagnosis between brain tumour relapse and radiation necrosis after stereotactic radiosurgery of brain metastases: literature review. Cancer Radiother. (2016) 20:837–45. doi: 10.1016/j.canrad.2016.07.098

PubMed Abstract | Crossref Full Text | Google Scholar

102. Zhang, Y, Zhang, H, Zhang, H, Ouyang, Y, Su, R, Yang, W, et al. Glioblastoma and solitary brain metastasis: differentiation by integrating demographic-MRI and deep-learning radiomics signatures. J Magn Reson Imaging. (2024) 60:909–20. doi: 10.1002/jmri.29123

Crossref Full Text | Google Scholar

103. Wang, Y, Hu, Y, Chen, S, Deng, H, Wen, Z, He, Y, et al. Improved automatic segmentation of brain metastasis gross tumor volume in computed tomography images for radiotherapy: a position attention module for U-Net architecture. Quant Imaging Med Surg. (2024) 14:4475–89. doi: 10.21037/qims-23-1627

PubMed Abstract | Crossref Full Text | Google Scholar

104. Wang, Y, Wen, Z, Su, L, Deng, H, Gong, J, Xiang, H, et al. Improved brain metastases segmentation using generative adversarial network and conditional random field optimization mask R-CNN. Med Phys. (2024) 51:5990–6001. doi: 10.1002/mp.17176

PubMed Abstract | Crossref Full Text | Google Scholar

105. Li, T, Gan, T, Wang, J, Long, Y, Zhang, K, and Liao, M. Application of CT radiomics in brain metastasis of lung cancer: a systematic review and meta-analysis. Clin Imaging. (2024) 114:110275. doi: 10.1016/j.clinimag.2024.110275

Crossref Full Text | Google Scholar

106. Zhang, HW, Wang, YR, Hu, B, Song, B, Wen, ZJ, Su, L, et al. Using machine learning to develop a stacking ensemble learning model for the CT radiomics classification of brain metastases. Sci Rep. (2024) 14:28575. doi: 10.1038/s41598-024-80210-x

PubMed Abstract | Crossref Full Text | Google Scholar

107. Gong, J, Wang, T, Wang, Z, Chu, X, Hu, T, Li, M, et al. Enhancing brain metastasis prediction in non-small cell lung cancer: a deep learning-based segmentation and CT radiomics-based ensemble learning model. Cancer Imaging. (2024) 24:1. doi: 10.1186/s40644-023-00623-1

PubMed Abstract | Crossref Full Text | Google Scholar

108. Chen, Z, Wang, W, Zhao, Z, Su, F, Men, A, and Meng, H. (2024). Practical DG: perturbation distillation on vision-language models for hybrid domain generalization. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 23501–23511.

Google Scholar

109. Li, T, Sahu, AK, Talwalkar, A, and Smith, V. Federated learning: challenges, methods, and future directions. IEEE Signal Process Mag. (2020) 37:50–60. doi: 10.1109/MSP.2020.2975749

Crossref Full Text | Google Scholar

110. Jiménez-Sánchez, A, Tardy, M, Ballester, MA, Mateus, D, and Piella, G. Memory-aware curriculum federated learning for breast cancer classification. Comput Methods Prog Biomed. (2023) 229:107318. doi: 10.1016/j.cmpb.2022.107318

Crossref Full Text | Google Scholar

111. Feng, B, Shi, J, Huang, L, Yang, Z, Feng, ST, Li, J, et al. Robustly federated learning model for identifying high-risk patients with postoperative gastric cancer recurrence. Nat Commun. (2024) 15:742. doi: 10.1038/s41467-024-44946-4

PubMed Abstract | Crossref Full Text | Google Scholar

112. Adnan, KM, Ghazal, TM, Saleem, M, Farooq, MS, Yeun, CY, Ahmad, M, et al. Deep learning driven interpretable and informed decision making model for brain tumour prediction using explainable AI. Sci Rep. (2025) 15:19223. doi: 10.1038/s41598-025-03358-0

PubMed Abstract | Crossref Full Text | Google Scholar

113. Chen, Y, Guo, W, Li, Y, Lin, H, Dong, D, Qi, Y, et al. Differentiation of glioblastoma and solitary brain metastasis using brain-tumor interface radiomics features based on MR images: a multicenter study. Acad Radiol. (2025) 32:4164–76. doi: 10.1016/j.acra.2025.04.008

PubMed Abstract | Crossref Full Text | Google Scholar

114. Sayres, R, Taly, A, Rahimy, E, Blumer, K, Coz, D, Hammel, N, et al. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology. (2019) 126:552–64. doi: 10.1016/j.ophtha.2018.11.016

PubMed Abstract | Crossref Full Text | Google Scholar

115. Cassinelli Petersen, G, Bousabarah, K, Verma, T, von Reppert, M, Jekel, L, Gordem, A, et al. Real-time PACS-integrated longitudinal brain metastasis tracking tool provides comprehensive assessment of treatment response to radiosurgery. Neuro-Oncol Adv. (2022) 4:vdac116. doi: 10.1093/noajnl/vdac116

PubMed Abstract | Crossref Full Text | Google Scholar

116. Luo, X, Yang, Y, Yin, S, Li, H, Shao, Y, Zheng, D, et al. Automated segmentation of brain metastases with deep learning: a multi-center, randomized crossover, multi-reader evaluation study. Neuro-Oncol. (2024) 26:2140–51. doi: 10.1093/neuonc/noae113

PubMed Abstract | Crossref Full Text | Google Scholar

117. Xu, X, Bao, L, Tan, Y, Zhu, L, Kong, F, and Wang, W. 1000-case reader study of radiologists' performance in interpretation of automated breast volume scanner images with a computer-aided detection system. Ultrasound Med Biol. (2018) 44:1694–702. doi: 10.1016/j.ultrasmedbio.2018.04.020

PubMed Abstract | Crossref Full Text | Google Scholar

118. Yoo, H, Lee, SH, Arru, CD, Doda Khera, R, Singh, R, Siebert, S, et al. AI-based improvement in lung cancer detection on chest radiographs: results of a multi-reader study in NLST dataset. Eur Radiol. (2021) 31:9664–74. doi: 10.1007/s00330-021-08074-7

PubMed Abstract | Crossref Full Text | Google Scholar

119. Kim, M, Yun, J, Cho, Y, Shin, K, Jang, R, Bae, HJ, et al. Deep learning in medical imaging. Neurospine. (2019) 16:657–68. doi: 10.14245/ns.1938396.198

PubMed Abstract | Crossref Full Text | Google Scholar

120. Wagner, MW, Namdar, K, Biswas, A, Monah, S, Khalvati, F, and Ertl-Wagner, BB. Radiomics, machine learning, and artificial intelligence—what the neuroradiologist needs to know. Neuroradiology. (2021) 63:1957–67. doi: 10.1007/s00234-021-02813-9

Crossref Full Text | Google Scholar

121. Noguchi, T, Uchiyama, F, Kawata, Y, Machitori, A, Shida, Y, Okafuji, T, et al. A fundamental study assessing the diagnostic performance of deep learning for a brain metastasis detection task. Magn Reson Med Sci. (2020) 19:184–94. doi: 10.2463/mrms.mp.2019-0063

PubMed Abstract | Crossref Full Text | Google Scholar

122. Zhang, M, Young, GS, Chen, H, Li, J, Qin, L, McFaline-Figueroa, JR, et al. Deep-learning detection of cancer metastases to the brain on MRI. J Magn Reson Imaging. (2020) 52:1227–36. doi: 10.1002/jmri.27129

PubMed Abstract | Crossref Full Text | Google Scholar

123. Kottlors, J, Geissen, S, Jendreizik, H, Große Hokamp, N, Fervers, P, Pennig, L, et al. Contrast-enhanced black blood MRI sequence is superior to conventional T1 sequence in automated detection of brain metastases by convolutional neural networks. Diagnostics. (2021) 11:1016. doi: 10.3390/diagnostics11061016

PubMed Abstract | Crossref Full Text | Google Scholar

124. Cho, J, Kim, YJ, Sunwoo, L, Lee, GP, Nguyen, TQ, Cho, SJ, et al. Deep learning-based computer-aided detection system for automated treatment response assessment of brain metastases on 3D MRI. Front Oncol. (2021) 11:739639. doi: 10.3389/fonc.2021.739639

PubMed Abstract | Crossref Full Text | Google Scholar

125. Park, YW, Jun, Y, Lee, Y, Han, K, An, C, Ahn, SS, et al. Robust performance of deep learning for automatic detection and segmentation of brain metastases using three-dimensional black-blood and three-dimensional gradient echo imaging. Eur Radiol. (2021) 31:6686–95. doi: 10.1007/s00330-021-07783-3

PubMed Abstract | Crossref Full Text | Google Scholar

126. Bouget, D, Pedersen, A, Jakola, AS, Kavouridis, V, Emblem, KE, Eijgelaar, RS, et al. Preoperative brain tumor imaging: models and software for segmentation and standardized reporting. Front Neurol. (2022) 13:932219. doi: 10.3389/fneur.2022.932219

PubMed Abstract | Crossref Full Text | Google Scholar

127. Lee, WK, Yang, HC, Lee, CC, Lu, CF, Wu, CC, Chung, WY, et al. Lesion delineation framework for vestibular schwannoma, meningioma and brain metastasis for gamma knife radiosurgery using stereotactic magnetic resonance images. Comput Methods Prog Biomed. (2023) 229:107311. doi: 10.1016/j.cmpb.2022.107311

PubMed Abstract | Crossref Full Text | Google Scholar

128. Li, R, Guo, Y, Zhao, Z, Chen, M, Liu, X, Gong, G, et al. MRI-based two-stage deep learning model for automatic detection and segmentation of brain metastases. Eur Radiol. (2023) 33:3521–31. doi: 10.1007/s00330-023-09420-7

PubMed Abstract | Crossref Full Text | Google Scholar

129. Prasad, M, Tripathi, S, and Dahal, K. Unsupervised feature selection and cluster center initialization based arbitrary shaped clusters for intrusion detection. Computers & Security. (2020) 99:102062

Google Scholar

130. Fang, X, Yu, F, Yang, G, and Qu, Y. Regression analysis with differential privacy preserving. IEEE access. (2019) 7:129353–129361

Google Scholar

131. Kumar, SS. Advancements in medical image segmentation: A review of transformer models. Computers and Electrical Engineering. (2025) 123:110099

Google Scholar

Keywords: brain metastases, artificial intelligence, deep learning, machine learning, radiotherapy, diagnostic imaging

Citation: Hu Y, Gao C, Wang Y, Wen Z, Yang C, Deng H, Chen S, Li Y, Pang H, Zhou P, Liao B and Luo Y (2025) Artificial intelligence in the task of segmentation and classification of brain metastases images: current challenges and future opportunities. Front. Neurol. 16:1581422. doi: 10.3389/fneur.2025.1581422

Received: 22 February 2025; Accepted: 29 August 2025;
Published: 23 September 2025.

Edited by:

Maria Caffo, University of Messina, Italy

Reviewed by:

Weimin Gao, Barrow Neurological Institute (BNI), United States
Zhenyu Gong, Sun Yat-sen University, China
Suhrud Panchawagh, Mayo Clinic, United States

Copyright © 2025 Hu, Gao, Wang, Wen, Yang, Deng, Chen, Li, Pang, Zhou, Liao and Luo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Haowen Pang, haowenpang@foxmail.com; Ping Zhou, zhouping11@swmu.edu.cn; Bin Liao, binbinzero@163.com; Yan Luo, luoyan47926@163.com

These authors have contributed equally to this work and share first authorship.

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.