SYSTEMATIC REVIEW article
Front. Digit. Health
Sec. Ethical Digital Health
Volume 7 - 2025 | doi: 10.3389/fdgth.2025.1653631
This article is part of the Research Topic: Ethical Considerations of Large Language Models: Challenges and Best Practices.
A Systematic Review of Ethical Considerations of Large Language Models (LLMs) in Healthcare and Medicine
Provisionally accepted
1 Riphah International University - Lahore Campus, Lahore, Pakistan
2 Luleå University of Technology, Luleå, Sweden
The rapid integration of large language models (LLMs) into healthcare offers significant potential for improving diagnosis, treatment planning, and patient engagement. However, it also presents serious ethical challenges that remain incompletely addressed. In this review, we analyzed 27 peer-reviewed studies published between 2017 and 2025 across four major open-access databases, applying strict eligibility criteria, robust synthesis methods, and established guidelines to explicitly examine the ethical aspects of deploying LLMs in clinical settings. We explore four key aspects: the main ethical issues arising from the use of LLMs in healthcare; the model architectures most commonly employed in ethical analyses; the healthcare application domains most frequently scrutinized; and the publication and bibliographic patterns characterizing this literature. Our synthesis reveals that bias and fairness (n = 7, 25.9%) are the most frequently discussed concerns, followed by safety, reliability, transparency, accountability, and privacy, and that the GPT family predominates (n = 14, 51.8%) among the models examined. Although privacy protection and bias mitigation have received notable attention in the literature, no prior review has systematically addressed the full range of ethical issues surrounding LLMs in healthcare; most previous studies focus narrowly on specific clinical subdomains and lack a comprehensive methodology. As a systematic mapping of open-access literature, this synthesis identifies dominant ethical patterns but is not exhaustive of all ethical work on LLMs in healthcare. We also synthesize the identified challenges, outline future research directions, and propose a provisional ethical integration framework to guide clinicians, developers, and policymakers in the responsible integration of LLMs into clinical workflows.
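To illustrate how the reported prevalence figures relate to the study counts, the following minimal Python sketch recomputes each category's share of the 27 included studies. The variable names and category labels are illustrative only and are not taken from the review's data extraction; small differences from the reported percentages may arise from rounding conventions.

# Minimal sketch (illustrative names): recompute each category's share of the
# 27 included studies from the counts reported in the abstract.
total_studies = 27
category_counts = {
    "bias and fairness": 7,    # most frequently discussed ethical concern
    "GPT-family models": 14,   # most frequently examined model family
}
for label, count in category_counts.items():
    share = 100 * count / total_studies
    print(f"{label}: n = {count}, {share:.1f}% of {total_studies} studies")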
Keywords: artificial intelligence (AI), deep learning, large language models (LLMs), ChatGPT, bioethical issues, bias, fairness, privacy
Received: 25 Jun 2025; Accepted: 25 Aug 2025.
Copyright: © 2025 Fareed, Fatima, Uddin, Ahmed and Sattar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Muhammad Awais Sattar, Luleå University of Technology, Luleå, Sweden
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.