ORIGINAL RESEARCH article
Front. Neurosci.
Sec. Brain Imaging Methods
Volume 19 - 2025 | doi: 10.3389/fnins.2025.1682603
This article is part of the Research Topic: Advancing neuroimaging diagnostics with machine learning and computational models.
MAUNet: A Mixed Attention U-Net with Spatial Multi-Dimensional Convolution and Contextual Feature Calibration for 3D Brain Tumor Segmentation in Multimodal MRI
Provisionally accepted
1 The First Affiliated Hospital of Henan University of Science and Technology, Luoyang, China
2 Henan University of Science and Technology, Luoyang, China
Brain tumors pose a significant threat to human health and demand accurate diagnosis and treatment planning. Manual analysis of medical imaging data is inefficient and error-prone, particularly given the heterogeneous morphology of tumors. To overcome these limitations, we propose MAUNet, a novel 3D brain tumor segmentation model based on U-Net that incorporates a Spatial Convolution (SConv) module, a Contextual Feature Calibration (CFC) module, and a gating mechanism. First, the SConv module employs a Spatial Multi-Dimensional Weighted Attention (SMWA) mechanism to enhance feature representation across the channel, height, width, and depth dimensions. Second, the CFC module constructs cascaded pyramid pooling layers to extract hierarchical contextual patterns and dynamically calibrates pixel-context relationships by computing feature similarities. Finally, a gating mechanism refines feature fusion in the skip connections, emphasizing critical features while suppressing irrelevant ones. Extensive experiments on the BraTS2019 and BraTS2020 datasets demonstrate the superiority of MAUNet, which achieves average Dice scores of 84.5% and 83.8%, respectively. Ablation studies further validate the contribution of each proposed module to segmentation accuracy. Our work provides a robust and efficient solution for automated brain tumor segmentation with significant potential for clinical application.
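To illustrate the kind of gated skip-connection fusion the abstract describes, the minimal sketch below shows one common formulation: a learned gate g passes encoder skip features through where g is high and falls back to decoder features where g is low. The exact gating formula of MAUNet is not given in the abstract, so the element-wise form g*skip + (1-g)*decoder, the function name `gated_fusion`, and the flat-list feature representation are all illustrative assumptions.

```python
import math


def sigmoid(x):
    # Standard logistic function, squashing a gate logit into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))


def gated_fusion(skip_feats, decoder_feats, gate_logits):
    """Hypothetical element-wise gated fusion for a U-Net skip connection.

    Each position blends the encoder skip feature and the upsampled
    decoder feature as g*skip + (1-g)*decoder, where g = sigmoid(logit).
    A high logit emphasizes the skip feature; a low logit suppresses it.
    Features are flattened to plain lists here for simplicity; a real
    model would operate on 5-D tensors (batch, channel, depth, H, W).
    """
    fused = []
    for s, d, z in zip(skip_feats, decoder_feats, gate_logits):
        g = sigmoid(z)
        fused.append(g * s + (1.0 - g) * d)
    return fused
```

For example, with gate logits of +10, -10, and 0, the fused output is dominated by the skip feature, the decoder feature, and an even blend, respectively.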
Keywords: Brain tumor segmentation, Convolution, Deep learning, Multi-scale, Mixed attention
Received: 09 Aug 2025; Accepted: 23 Sep 2025.
Copyright: © 2025 Chen, Cai, Tan, Lv, Zhang and Du. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Wenna Chen, chenwenna0408@163.com
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.