ORIGINAL RESEARCH article
Front. Digit. Health
Sec. Health Informatics
Volume 7 - 2025 | doi: 10.3389/fdgth.2025.1608949
Machine Learning and eXplainable Artificial Intelligence (XAI) to Predict and Interpret Lead Toxicity in Pregnant Woman and Unborn Baby
Provisionally accepted
- 1 Intelligent Systems Research Centre, Ulster University, Derry, United Kingdom
- 2 King George's Medical University, Lucknow, Uttar Pradesh, India
- 3 Ulster University, Coleraine, United Kingdom
- 4 Era University, Lucknow, India
- 5 Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, India
Lead toxicity is a well-recognised environmental health issue, with prenatal exposure posing significant risks to infants. One major pathway of exposure is maternal lead transfer during pregnancy. Therefore, accurately characterising maternal lead levels is critical for enabling targeted and personalised healthcare interventions. Current detection methods for lead poisoning rely on laboratory blood tests, which are not feasible for wide-population screening due to cost, accessibility, and logistical constraints. To address this limitation, our previous research proposed a novel machine learning (ML)-based model that predicts lead exposure levels in pregnant women using sociodemographic data alone. However, for such predictive models to gain broader acceptance, especially in clinical and public health settings, transparency and interpretability are essential: understanding the reasoning behind a model's predictions is crucial for building trust and facilitating informed decision-making. In this study, we present the first application of an eXplainable Artificial Intelligence (XAI) framework to interpret predictions made by our ML-based lead exposure model. Using a dataset of 200 blood samples and 12 sociodemographic features, a Random Forest classifier was trained, achieving an accuracy of 84.52%. We applied two widely used XAI methods, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), to provide insight into how each input feature contributed to the model's predictions.
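As a minimal, hedged sketch of the workflow the abstract describes (not the authors' published code), the Python snippet below trains a Random Forest classifier on a tabular dataset and produces SHAP and LIME explanations. The file name "lead_exposure.csv", the target column "lead_level", the class labels, and the train/test split are hypothetical placeholders; the study's actual 12 sociodemographic features and preprocessing are not reproduced here.

```python
# Sketch of the RF + SHAP + LIME pipeline described in the abstract.
# Assumptions (not from the paper): a CSV named "lead_exposure.csv" with a
# binary target column "lead_level" and the remaining columns as predictors.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the (hypothetical) dataset of sociodemographic features.
data = pd.read_csv("lead_exposure.csv")
X = data.drop(columns=["lead_level"])
y = data["lead_level"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Train the Random Forest classifier and report hold-out accuracy.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Global feature attributions with SHAP (TreeExplainer supports tree ensembles;
# for a binary classifier shap_values may be returned per class).
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# Local explanation of one prediction with LIME.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X.columns.tolist(),
    class_names=["low_exposure", "high_exposure"],  # placeholder labels
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=X.shape[1]
)
print(lime_exp.as_list())
```

In this arrangement, SHAP summarises feature contributions across the whole test set, while LIME explains an individual prediction, which mirrors the global-versus-local interpretability contrast the study draws on.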
Keywords: machine learning, classification, predictive modelling, explainable AI, lead toxicity
Received: 09 Apr 2025; Accepted: 08 May 2025.
Copyright: © 2025 Chaurasia, Yogarajah, Ali Mahdi, Mcclean, AHMAD, Jafar and Singh. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Pratheepan Yogarajah, Intelligent Systems Research Centre, Ulster University, Derry, BT48 7JL, United Kingdom
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.