ORIGINAL RESEARCH article

Front. Med.

Sec. Precision Medicine

Volume 12 - 2025 | doi: 10.3389/fmed.2025.1618550

This article is part of the Research Topic: AI Innovations in Neuroimaging: Transforming Brain Analysis.

Transfer Deep Learning and Explainable AI Framework for Brain Tumour and Alzheimer's Detection Across Multiple Datasets

Provisionally accepted
  • 1Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
  • 2Anderson University, Anderson, Indiana, United States
  • 3University of South Carolina, Columbia, South Carolina, United States
  • 4Northern Border University, Arar, Northern Borders, Saudi Arabia
  • 5King Khalid University, Abha, Saudi Arabia
  • 6Jouf University, Sakakah, Saudi Arabia
  • 7University of Tabuk, Tabuk, Tabuk, Saudi Arabia

The final, formatted version of the article will be published soon.

The pressing need for accurate diagnostic tools in medicine, particularly for diseases such as brain tumors and Alzheimer's, poses significant challenges to timely and effective treatment. This study presents a novel approach to MRI image classification that integrates transfer learning with Explainable AI (XAI) techniques. The proposed method uses a hybrid CNN-VGG16 model, which leverages pre-trained features from the VGG16 architecture to enhance classification performance across three distinct MRI datasets: a brain tumor classification dataset, an Alzheimer's disease detection dataset, and a second brain tumor dataset. A comprehensive preprocessing pipeline, including image normalization, resizing, and data augmentation, ensures consistent input quality and variability. The model achieves accuracy rates of 94% on the first brain tumor dataset, 81% on the augmented Alzheimer's dataset, and 93% on the third dataset, underscoring its capability to differentiate various neurological conditions. Furthermore, the integration of SHapley Additive exPlanations (SHAP) provides a transparent view of the model's decision-making process, allowing clinicians to understand which regions of the MRI scans contribute to the classification outcomes. This research demonstrates the potential of combining advanced deep learning techniques with explainability to improve diagnostic accuracy and trust in AI applications within healthcare.
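The full training code is not part of this abstract, but the preprocessing steps it names (resizing, normalization, augmentation) can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation; the 224×224 target size, nearest-neighbour resizing, and the specific augmentations (random horizontal flip, small intensity jitter) are assumptions chosen for brevity.

```python
import numpy as np

def preprocess(img, size=224):
    """Nearest-neighbour resize to (size, size), then min-max normalise to [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = img[rows][:, cols].astype(np.float32)
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo + 1e-8)

def augment(img, rng):
    """Random horizontal flip plus small Gaussian intensity jitter, clipped to [0, 1]."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return np.clip(img + rng.normal(0.0, 0.02, img.shape), 0.0, 1.0)

rng = np.random.default_rng(0)
scan = rng.integers(0, 255, (256, 300)).astype(np.uint8)  # stand-in for one MRI slice
x = augment(preprocess(scan), rng)  # shape (224, 224), values in [0, 1]
```

In a transfer-learning setup like the one described, such a normalized slice would then be stacked to three channels and fed through the pre-trained VGG16 feature extractor before the custom classification head.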

Keywords: MRI image classification, transfer learning, Explainable AI (XAI), hybrid CNN-VGG16 model, brain tumors, Alzheimer's disease, SHAP, medical imaging

Received: 26 Apr 2025; Accepted: 03 Jun 2025.

Copyright: © 2025 Alsubai, Ojo, Nathaniel, Ayari, Baili, Almadhor and Al Hejaili. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Ahmad Almadhor, Jouf University, Sakakah, Saudi Arabia

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.