ORIGINAL RESEARCH article

Front. Med.

Sec. Pathology

Volume 12 - 2025 | doi: 10.3389/fmed.2025.1694024

This article is part of the Research Topic: Digital Pathology and Telepathology: Integrating AI-driven Sustainable Solutions into Healthcare Systems

An Explainable Deep Learning-Based Feature Fusion Model for Acute Lymphoblastic Leukemia Diagnosis and Severity Assessment

Provisionally accepted
Hajra Khan1, Muhammad Zaheer Sajid2, Nauman Ali Khan3, Muhammad Fareed Hamid4, Ayman Youssef5, Areesha Rehman6*, Nour Aburaed7
  • 1 Department of Computer Software Engineering, Military College of Signals, National University of Science and Technology, Islamabad 44000, Pakistan
  • 2 Information Technology Department, University of Missouri, Columbia, United States
  • 3 Department of Computer Software Engineering, Military College of Signals, National University of Science and Technology, Islamabad, Pakistan
  • 4 Department of Computer Software Engineering, Military College of Signals (MCS), National University of Science and Technology, Islamabad, Pakistan
  • 5 Department of Computers and Systems, Electronics Research Institute, Cairo, Egypt
  • 6 Faculty of Pharmacy, Bahauddin Zakariya University, Multan, Pakistan
  • 7 College of Engineering and IT, University of Dubai, Dubai, United Arab Emirates

The final, formatted version of the article will be published soon.

Abstract: Acute Lymphoblastic Leukemia (ALL) is a malignant blood disorder that primarily affects white blood cells, particularly in children, where early and accurate diagnosis is critical for effective treatment and recovery. This study introduces a novel deep learning framework, termed XIncept-ALL, for automated detection and classification of ALL severity levels. The model integrates pre-trained InceptionV3 and Xception networks through feature fusion blocks, enabling robust representation learning. To enhance performance, automated data augmentation techniques were applied to address class imbalance and reduce overfitting. Furthermore, Grad-CAM visualizations were employed to highlight discriminative regions of ALL cell images, providing interpretability of the model's predictions and validating that the network focuses on clinically relevant features. The proposed system was evaluated on a newly developed private dataset, Pak-ALL, collected from Pakistani hospitals, along with additional datasets from reliable web sources. Extracted features were classified with an XGBoost classifier into four categories: Benign, Early, Pre, and Pro. Extensive experiments demonstrated the superior performance of the proposed framework, achieving an average accuracy of 99.5% on a challenging external dataset (Iranian dataset). The results highlight the efficiency, scalability, and practicality of the XIncept-ALL model for medical image analysis. By offering both improved classification accuracy and interpretable decision support via Grad-CAM, the proposed approach represents a significant advancement toward reliable computer-aided diagnostic systems for ALL, with strong potential to support clinical decision-making and improve patient outcomes.
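The pipeline described in the abstract (two pre-trained CNN backbones whose features are fused, then classified by a gradient-boosted model) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two backbones are mocked with fixed random projections (the real system uses pre-trained InceptionV3 and Xception feature extractors), the toy "images" and labels are synthetic, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost to keep the sketch dependency-light.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

n, h, w = 200, 8, 8                      # tiny synthetic "images" for illustration
W_a = rng.normal(size=(h * w, 32))       # hypothetical stand-in for InceptionV3 features
W_b = rng.normal(size=(h * w, 32))       # hypothetical stand-in for Xception features

def backbone_a(images):
    # Maps each image to a fixed-length embedding (mock of a pooled CNN feature map).
    return images.reshape(len(images), -1) @ W_a

def backbone_b(images):
    return images.reshape(len(images), -1) @ W_b

images = rng.normal(size=(n, h, w))
labels = rng.integers(0, 4, size=n)      # four classes: Benign / Early / Pre / Pro

# Feature fusion by concatenating the two backbone embeddings.
fused = np.concatenate([backbone_a(images), backbone_b(images)], axis=1)
print(fused.shape)                       # (200, 64): 32-d + 32-d fused vector per image

# Gradient-boosted classifier on the fused features
# (GradientBoostingClassifier used here in place of XGBoost).
clf = GradientBoostingClassifier(n_estimators=20).fit(fused, labels)
preds = clf.predict(fused)
```

In the actual framework, the concatenated embeddings would come from the penultimate layers of the fine-tuned backbones, and Grad-CAM would be applied to those backbones separately to visualize which image regions drive each prediction.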

Keywords: Acute Lymphoblastic Leukemia (ALL), feature fusion, deep learning, Convolutional Neural Networks (CNN), image classification, image processing

Received: 27 Aug 2025; Accepted: 15 Oct 2025.

Copyright: © 2025 Khan, Sajid, Khan, Hamid, Youssef, Rehman and Aburaed. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Areesha Rehman, areeshaaimraan@gmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.