SYSTEMATIC REVIEW article
Front. Digit. Health
Sec. Ethical Digital Health
This article is part of the Research Topic: Ethical Considerations of Large Language Models: Challenges and Best Practices
Ethical and Practical Challenges of Generative AI in Healthcare and Proposed Solutions: A Survey
Provisionally accepted
1 The University of Tennessee Knoxville, Knoxville, United States
2 The University of Mississippi, University, United States
Abstract

Background: Generative artificial intelligence (AI) is rapidly transforming healthcare, but its adoption introduces significant ethical and practical challenges. Algorithmic bias, ambiguous liability, lack of transparency, and data privacy risks can undermine patient trust and create health disparities, making their resolution critical for responsible AI integration.

Objectives: This systematic review analyzes the generative AI landscape in healthcare. Our objectives were to: (1) identify AI applications and their associated ethical and practical challenges; (2) evaluate current data-centric, model-centric, and regulatory solutions; and (3) propose a framework for responsible AI deployment.

Methods: Following the PRISMA 2020 statement, we conducted a systematic review of PubMed and Google Scholar for articles published between January 2020 and May 2025. A multi-stage screening process yielded 54 articles, which were analyzed using a thematic narrative synthesis.

Results: Our review confirmed AI's growing integration into medical training, research, and clinical practice. Key challenges identified include systemic bias from non-representative data, unresolved legal liability, the "black box" nature of complex models, and significant data privacy risks. Proposed solutions are multifaceted, spanning technical (e.g., explainable AI), procedural (e.g., stakeholder oversight), and regulatory strategies.

Discussion: Current solutions are fragmented and face significant implementation barriers. Technical fixes are insufficient without robust governance, clear legal guidelines, and comprehensive professional education. Gaps in global regulatory harmonization persist, and existing frameworks remain ill-suited for adaptive AI. A multi-layered, socio-technical approach is essential to build trust and ensure the safe, equitable, and ethical deployment of generative AI in healthcare.
Conclusions: The review confirmed that generative AI is increasingly integrated into medical training, research, and clinical practice. Key challenges identified include systemic bias stemming from non-representative data, unresolved legal liability, the "black box" nature of complex models, and significant data privacy risks. These challenges can undermine patient trust and create health disparities. Proposed solutions are multifaceted, spanning technical (such as explainable AI), procedural (such as stakeholder oversight), and regulatory strategies.
Keywords: generative artificial intelligence, healthcare ethics, ethical challenges, practical challenges, large language models, bias mitigation, systematic review, solution strategies
Received: 25 Aug 2025; Accepted: 03 Nov 2025.
Copyright: © 2025 Tung, Hasnaeen and Zhao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Shah Md Nehal Hasnaeen, nheee14@gmail.com
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.