ORIGINAL RESEARCH article
Front. Med.
Sec. Precision Medicine
This article is part of the Research Topic: Advancements and Challenges in AI-Driven Healthcare Innovation
Interpretable Machine Learning Models for Beta Thalassemia Prediction: An Explainable AI Approach for Smart Healthcare 5.0
Provisionally accepted
- 1Green International University Lahore, Lahore, Pakistan
- 2College of Information Science and Technology, Hainan Normal University, Haikou 571158, China
- 3Department of Computer Sciences, Faculty of Computing and Information Technology, Northern Border University, Rafha 91911, Saudi Arabia
- 4Manchester Metropolitan University, Manchester, United Kingdom
Beta thalassemia is an inherited blood disorder that limits the production of beta globin, a protein essential to the formation of hemoglobin and red blood cells (RBCs) and, through them, to the delivery of oxygen to tissues throughout the body. It is caused by genetic variation in the hemoglobin beta (HBB) gene, which directs the body to make beta globin chains, and occurs in three major forms: major, intermedia, and minor. An expert system is needed to support the diagnosis of this disease. This study introduces an interpretable expert system for the prediction of beta thalassemia that incorporates Explainable AI (XAI) techniques to address clinical needs. Principal Component Analysis (PCA) combined with the Synthetic Minority Over-sampling Technique (SMOTE) is applied to the Beta Thalassemia Carrier (BTC) dataset of 5,066 patients to reduce dimensionality and balance the output classes. Machine learning classifiers, namely Neural Networks, Recurrent Neural Networks, and Long Short-Term Memory (LSTM), are applied; the LSTM achieves 99.30% accuracy, 99.33% precision, 99.33% recall, 99.33% specificity, and a 99.33% F1 score. Furthermore, to ensure the model's transparency and interpretability, the proposed method integrates SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), enabling both global and local interpretability of model predictions. SHAP gives insight into important features at the global level, while LIME explains individual predictions, making the model's decisions more comprehensible for clinical applications.
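To make the described pipeline concrete, below is a minimal sketch of the PCA + SMOTE preprocessing, an LSTM classifier, and model-agnostic SHAP/LIME explanations. It is illustrative only: the BTC dataset is not public here, so synthetic data from make_classification stands in for it, and all hyperparameters (PCA component count, LSTM width, epochs), the PCA-before-SMOTE ordering, and the treatment of tabular rows as length-1 sequences are assumptions, not the authors' reported configuration.

```python
# Hypothetical sketch of the abstract's pipeline; not the paper's exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for the 5,066-patient BTC dataset (imbalanced classes).
X, y = make_classification(n_samples=5066, n_features=12,
                           weights=[0.8, 0.2], random_state=42)

# 1) Dimensionality reduction with PCA, then class balancing with SMOTE.
X = StandardScaler().fit_transform(X)
X = PCA(n_components=8).fit_transform(X)
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_bal, y_bal, test_size=0.2, stratify=y_bal, random_state=42)

# 2) LSTM classifier: each tabular row is fed as a length-1 sequence.
X_tr_seq = X_tr[:, None, :]          # shape (n, timesteps=1, features)
X_te_seq = X_te[:, None, :]
model = models.Sequential([
    layers.Input(shape=(1, X_tr.shape[1])),
    layers.LSTM(32),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr_seq, y_tr, epochs=10, batch_size=64, verbose=0)
print("test accuracy:", model.evaluate(X_te_seq, y_te, verbose=0)[1])

# 3) A predict_proba-style wrapper lets SHAP (global) and LIME (local)
#    treat the LSTM as a black box over 2-D tabular input.
def predict_proba(X2d):
    p = model.predict(X2d[:, None, :], verbose=0).ravel()
    return np.column_stack([1 - p, p])

# Global feature attributions on a small test subset (KernelExplainer is
# model-agnostic but slow, hence the sampled background and 20 rows).
explainer = shap.KernelExplainer(predict_proba, shap.sample(X_tr, 50))
shap_values = explainer.shap_values(X_te[:20])

# Local explanation for a single patient-level prediction.
lime_exp = LimeTabularExplainer(X_tr, class_names=["non-carrier", "carrier"])
explanation = lime_exp.explain_instance(X_te[0], predict_proba)
print(explanation.as_list())
```

Note that in a real deployment the scaler and PCA would be fit on the training split only, and the explanations would be computed over PCA components unless the transform is inverted back to the original clinical features.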
Keywords: Beta Thalassemia Carrier (BTC), Explainable AI (XAI), Neural Networks, Recurrent Neural Networks, Long Short-Term Memory (LSTM)
Received: 19 Aug 2025; Accepted: 27 Nov 2025.
Copyright: © 2025 Abbass, Shoaib, Khan, Bilal, Algarni and Sarwar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Raheem Sarwar
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.