BRIEF RESEARCH REPORT article
Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 8 - 2025 | doi: 10.3389/frai.2025.1525937
This article is part of the Research Topic: Ethical and Legal Implications of Artificial Intelligence in Public Health: Balancing Innovation and Privacy
Evaluating Artificial Intelligence Bias in Nephrology: The Role of Diversity, Equity, and Inclusion in AI-Driven Decision-Making and Ethical Regulation
Provisionally accepted
- 1Division of Nephrology and Hypertension, Mayo Clinic, Rochester, United States
- 2Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Jacksonville, FL, United States
Background: The integration of Artificial Intelligence (AI) in nephrology raises concerns about bias, fairness, and ethical decision-making, particularly regarding Diversity, Equity, and Inclusion (DEI). AI-driven models, including Large Language Models (LLMs) such as ChatGPT, may unintentionally reinforce disparities in patient care and workforce recruitment. This study evaluates how ChatGPT 3.5 and 4.0 handle DEI-related ethical considerations in nephrology, emphasizing the need for improved regulatory oversight to ensure equitable AI deployment.

Methods: Conducted in March 2024, this study used ChatGPT 3.5 and 4.0 to evaluate ethical AI decision-making in nephrology. Eighty simulated cases assessed ChatGPT's responses across diverse nephrology topics. Factors considered included age, sex, gender identity, race, ethnicity, religion, cultural beliefs, socioeconomic status, education, family structure, employment, insurance, geographic location, disability, mental health, language proficiency, and technology access.

Results: ChatGPT 3.5 responded to all scenarios without refusal, contradicting DEI principles by not rejecting potentially discriminatory criteria. In contrast, ChatGPT 4.0 declined to make decisions based on discriminatory factors in 13 scenarios (16.3%) in the first round and in 5 scenarios (6.3%) in the second round.

Conclusion: While ChatGPT 4.0 shows progress in ethical AI decision-making, its limited recognition of bias and DEI considerations underscores the need for robust AI regulation in nephrology. AI governance must incorporate structured DEI guidelines, ongoing bias detection, and ethical oversight to prevent disparities. This study highlights the importance of transparency, fairness, and inclusivity in AI development, calling for collaboration among AI developers, nephrologists, policymakers, and patient communities to ensure AI serves as an equitable tool in nephrology.
Keywords: artificial intelligence, diversity, equity, inclusion, nephrology, ChatGPT, AI bias detection, ethical AI regulation
Received: 10 Nov 2024; Accepted: 12 May 2025.
Copyright: © 2025 Balakrishnan, Thongprayoon, Wathanavasin, Miao, Mao, Craici and Cheungpasitporn. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Wisit Cheungpasitporn, Division of Nephrology and Hypertension, Mayo Clinic, Rochester, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.