ORIGINAL RESEARCH article
Front. Oncol.
Sec. Cancer Imaging and Image-directed Interventions
Volume 15 - 2025 | doi: 10.3389/fonc.2025.1585891
This article is part of the Research Topic "Advances in Oncological Imaging Techniques".
BrainTumNet: Multi-Task Deep Learning Framework for Brain Tumor Segmentation and Classification Using Adaptive Masked Transformers
Provisionally accepted
1 Nanchang University, Nanchang, China
2 Department of Neurosurgery, Nanjing Jinling Hospital, Nanjing, Jiangsu Province, China
3 Second People’s Hospital of Yibin, Yibin, China
4 Henan University, Kaifeng, Henan Province, China
Background and Objective: Accurate diagnosis of brain tumors has a major impact on patient prognosis and treatment planning. Traditional diagnostic workflows rely primarily on clinicians' subjective interpretation of medical images, which depends heavily on physician experience and is limited by time demands, fatigue, and inconsistent diagnoses. Recently, deep learning technologies, particularly Convolutional Neural Networks (CNNs), have achieved breakthrough advances in medical image analysis, offering a new paradigm for automated, precise diagnosis. However, existing research largely focuses on single-task modeling and lacks comprehensive solutions that integrate tumor segmentation with classification. This study aims to develop a multi-task deep learning model for precise brain tumor segmentation and type classification.

Methods: The study included 485 pathologically confirmed cases, comprising T1-enhanced MRI sequences of high-grade gliomas, metastatic tumors, and meningiomas. The dataset was proportionally divided into training (378 cases), testing (109 cases), and external validation (51 cases) sets. We designed and implemented BrainTumNet, a deep learning-based multi-task framework featuring an improved encoder-decoder architecture, an adaptive masked Transformer, and a multi-scale feature fusion strategy to simultaneously perform tumor region segmentation and pathological type classification. Five-fold cross-validation was used to verify the results.

Results: On the test set, BrainTumNet achieved an Intersection over Union (IoU) of 0.921, a Hausdorff Distance (HD) of 12.13, and a Dice Similarity Coefficient (DSC) of 0.91 for tumor segmentation. For tumor classification, it attained an accuracy of 93.4% with an Area Under the ROC Curve (AUC) of 0.96. Performance remained stable on the external validation set, confirming the model's generalization capability.

Conclusion: The proposed BrainTumNet model achieves high-precision brain tumor segmentation and classification through a multi-task learning strategy. Experimental results demonstrate the model's strong potential for clinical application, providing objective and reliable auxiliary information for preoperative assessment and treatment decision-making in brain tumor cases.
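The abstract describes BrainTumNet only at the architectural level (a shared encoder-decoder with an adaptive masked Transformer and multi-scale feature fusion, trained jointly for segmentation and classification). The sketch below illustrates the general multi-task pattern this implies: a shared encoder, a segmentation decoder, and a classification head trained with a weighted joint loss. All module names, layer sizes, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal multi-task sketch: shared encoder, segmentation decoder, classification head.
# All names, layer sizes, and the joint-loss weighting are illustrative assumptions;
# this is NOT the authors' BrainTumNet code.
import torch
import torch.nn as nn

class MultiTaskBrainTumorNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        # Shared encoder: two stages, the second downsampled by 2.
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # Segmentation decoder: upsample back to input resolution, 1-channel tumor mask.
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.seg_head = nn.Conv2d(64, 1, 1)  # applied after concatenating the skip connection
        # Classification head: global pooling over the deepest features -> 3 tumor types.
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(64, num_classes))

    def forward(self, x):
        f1 = self.enc1(x)                      # (B, 32, H, W)
        f2 = self.enc2(f1)                     # (B, 64, H/2, W/2)
        up = self.up(f2)                       # (B, 32, H, W)
        seg_logits = self.seg_head(torch.cat([up, f1], dim=1))
        cls_logits = self.cls_head(f2)
        return seg_logits, cls_logits

# Joint training step: weighted sum of segmentation and classification losses.
model = MultiTaskBrainTumorNet()
seg_loss_fn, cls_loss_fn = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
x = torch.randn(2, 1, 128, 128)               # dummy single-channel MRI slices
mask = torch.randint(0, 2, (2, 1, 128, 128)).float()
label = torch.randint(0, 3, (2,))             # glioma / metastasis / meningioma
seg_logits, cls_logits = model(x)
loss = seg_loss_fn(seg_logits, mask) + 0.5 * cls_loss_fn(cls_logits, label)
loss.backward()
```

In this pattern, the two task heads share the encoder's feature maps, so gradients from both losses shape a common representation; the 0.5 weighting above is an arbitrary placeholder for whatever balancing the authors actually used.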
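For reference, the overlap metrics reported above (Dice Similarity Coefficient and IoU) follow their standard definitions; a minimal sketch of the per-case computation is shown below. The example masks and the way cases are averaged are assumptions for illustration, not the authors' evaluation pipeline.

```python
# Standard per-case overlap metrics for binary segmentation masks (Dice, IoU).
# Textbook formulation shown for clarity; not taken from the paper's evaluation code.
import numpy as np

def dice_and_iou(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7):
    """Return (Dice similarity coefficient, intersection-over-union) for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice, iou

# Example with two synthetic binary masks.
pred = np.zeros((128, 128))
pred[30:80, 30:80] = 1
gt = np.zeros((128, 128))
gt[35:85, 35:85] = 1
print(dice_and_iou(pred, gt))
```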
Keywords: brain tumor diagnosis, deep learning, multi-task learning, medical image analysis, convolutional neural networks
Received: 01 Mar 2025; Accepted: 28 Apr 2025.
Copyright: © 2025 Lv, Shu, Liang, Qiu, Xiong, Ye, Li, Liu, Niu, Chen and Rao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Cheng Lv, Nanchang University, Nanchang, China
Shengbo Chen, Nanchang University, Nanchang, China
Hong Rao, Nanchang University, Nanchang, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.