ORIGINAL RESEARCH article
Front. Med.
Sec. Pathology
Volume 12 - 2025 | doi: 10.3389/fmed.2025.1693603
This article is part of the Research Topic: Digital Pathology and Telepathology: Integrating AI-driven Sustainable Solutions into Healthcare Systems
CausalX-Net: A Causality-Guided Explainable Segmentation Network for Brain Tumors
Provisionally accepted
- 1Rajeev Gandhi Memorial College of Engineering and Technology, Nandyal, India
- 2G Pullaiah College of Engineering and Technology, Kurnool, India
- 3G Pulla Reddy Engineering College, Kurnool, India
- 4King Khalid University, Abha, Saudi Arabia
- 5Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- 6Soonchunhyang University, Asan-si, Republic of Korea
Brain tumors represent a significant health challenge in India, with approximately 28,000 new cases diagnosed annually. Conventional deep learning approaches for MRI-based segmentation often struggle with irregular tumor boundaries, heterogeneous intensity patterns, and complex spatial relationships, resulting in limited clinical interpretability despite high numerical accuracy. This study introduces CausalX-Net, a causality-guided explainable segmentation network for brain tumor analysis from multi-modal MRI. Unlike purely correlation-based models, CausalX-Net leverages structural causal modeling and interventional reasoning to identify and quantify the causal influence of imaging features and spatial regions on segmentation outcomes. Through counterfactual analysis, the framework can provide clinically relevant "what-if" explanations, such as predicting changes in tumor classification if specific modalities, regions, or features are altered. Evaluated on the BraTS 2021 dataset, CausalX-Net achieved a Dice Similarity Coefficient of 92.5%, outperforming state-of-the-art CNN-based baselines by 4.3% while maintaining competitive inference efficiency. Furthermore, causal attribution maps and intervention-based sensitivity analyses enhance trust and transparency, offering radiologists actionable insights for diagnosis and treatment planning. This research demonstrates that integrating causal inference into segmentation not only improves accuracy but also delivers interpretable, decision-supportive explanations, representing a significant step toward transparent and reliable AI-assisted neuroimaging in clinical settings.
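The abstract reports a Dice Similarity Coefficient and describes intervention-based sensitivity analysis (zeroing out a modality and observing the change in the prediction). The sketch below illustrates both ideas in minimal form; it is not the paper's implementation. The `modality_ablation_effect` helper, the stand-in thresholding "model", and the use of 1 − Dice as a causal-effect proxy are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def modality_ablation_effect(model, volume: np.ndarray, modality_idx: int) -> float:
    """Toy interventional sensitivity: apply a do-operation that zeroes one
    MRI modality and measure how much the predicted mask changes.
    Returns 1 - Dice(baseline, intervened) as a simple effect proxy."""
    baseline = model(volume)
    intervened_volume = volume.copy()
    intervened_volume[modality_idx] = 0.0  # intervention: remove the modality
    intervened = model(intervened_volume)
    return 1.0 - dice_coefficient(baseline, intervened)

# Usage with a stand-in "model" that thresholds the mean across modalities.
volume = np.zeros((4, 8, 8))        # 4 modalities (e.g. T1, T1ce, T2, FLAIR)
volume[0, 2:6, 2:6] = 1.0           # signal present only in modality 0
model = lambda v: v.mean(axis=0) > 0.1

print(round(modality_ablation_effect(model, volume, 0), 3))  # → 1.0 (modality 0 is causal)
print(round(modality_ablation_effect(model, volume, 3), 3))  # → 0.0 (modality 3 is inert)
```

Ranking modalities (or spatial regions) by this effect score is one simple way to produce the kind of "what-if" attribution the abstract describes: a high score means the prediction depends causally on that input under the intervention.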
Keywords: CausalX-Net, Brain tumor segmentation, Causal Effect (CE) Maps, Counterfactual explanations, explainable artificial intelligence (XAI), deep learning
Received: 27 Aug 2025; Accepted: 26 Sep 2025.
Copyright: © 2025 Patike Kiran Rao, Prakash, Pasha, Algarni, Ayadi, Cho and Nam. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Dr Patike Kiran Rao, kiranraocse@gmail.com
Yunyoung Nam, ynam@sch.ac.kr
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.