PERSPECTIVE article
Front. Digit. Health
Sec. Ethical Digital Health
This article is part of the Research Topic: The Health and Illness Beliefs and Experiences of Minoritized Groups
Explainable and Reproducible AI: Culturally Responsive AI for Health Equity in Minoritized Groups
Provisionally accepted
- 1 University of South Australia, Adelaide, Australia
- 2 Health Equity Research Consortium, London, United Kingdom
- 3 Fred Hutchinson Cancer Center, Seattle, United States
- 4 The University of Edinburgh, Edinburgh, United Kingdom
- 5 Aristotle University of Thessaloniki, Thessaloniki, Greece
- 6 Université Paris Cité, Paris, France
Artificial intelligence (AI) is transforming healthcare by enabling advanced diagnostics, personalized treatments, and improved operational efficiency. By identifying complex patterns and correlations in data, AI can supplement clinical decision-making, enabling faster diagnoses and treatment decisions tailored to the needs of diverse communities. Realizing these benefits, however, requires that clinical AI models be consistent, reliable, and validated across diverse populations and clinical environments. Moreover, because the patterns and correlations these models exploit are often unexpected, AI models demand greater explainability than other medical technologies. This is especially true for complex models, whose internal decision processes are often opaque and uninterpretable to both model developers and clinicians, leading such models to be described as 'black boxes'. To address this fundamental challenge of interpretability, explainable AI (XAI) has emerged as a critical approach, providing insight, often post hoc, into why a model produced a given output; studies indicate that most physicians prefer explainable over non-explainable AI. This perspective therefore explores key considerations for ensuring that AI promotes health equity in marginalized communities, building on similar shifts toward anticipatory health action explored in humanitarian and climate AI contexts. We argue that equity in AI depends on embedding explainability and reproducibility within culturally responsive frameworks that address historical and structural bias.
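To make these two requirements concrete, the sketch below (not from the article; the dataset, the "group" labels, and all variable names are illustrative assumptions) uses scikit-learn to show, first, subgroup validation, reporting a classifier's discrimination separately per demographic group so that a pooled metric cannot mask underperformance in a minoritized group, and second, one simple post-hoc XAI technique, permutation importance, which estimates how much each feature contributes to held-out performance.

```python
# A minimal, hypothetical sketch: per-subgroup validation plus a post-hoc
# explanation via permutation importance. Synthetic data stands in for a
# clinical dataset; a real equity audit would stratify on recorded
# demographic attributes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = rng.integers(0, 3, size=len(y))  # e.g., 3 self-reported groups

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Subgroup validation: report AUC separately for each group, since an
# aggregate figure can hide poor performance in smaller subpopulations.
for g in np.unique(g_te):
    mask = g_te == g
    auc = roc_auc_score(y_te[mask], model.predict_proba(X_te[mask])[:, 1])
    print(f"group {g}: AUC = {auc:.3f} (n = {mask.sum()})")

# Post-hoc explainability: permutation importance scores each feature by
# the drop in held-out score when its values are shuffled.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"feature {i}: importance = {imp.importances_mean[i]:.3f}")
```

Permutation importance is used here only as one accessible example of a post-hoc method; in practice, the choice of XAI technique and of which subgroups to stratify on should itself be made in consultation with the communities the model is meant to serve.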
Keywords: AI, explainability, reproducibility, ethnic minorities, culturally responsive AI, trust building and communication
Received: 15 Aug 2025; Accepted: 01 Dec 2025.
Copyright: © 2025 King-Okoye, Fuller, Tan, Marlow, Fleuriot, Tzatzakis, Kanodia, Odoh, Dubbala and Alvarez Alvarez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Michelle King-Okoye
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
