ORIGINAL RESEARCH article
Front. Med.
Sec. Regulatory Science
Volume 12 - 2025 | doi: 10.3389/fmed.2025.1594450
This article is part of the Research Topic: Ethical and Legal Implications of Artificial Intelligence in Public Health: Balancing Innovation and Privacy
Reducing Misdiagnosis in AI-Driven Medical Diagnostics: A Multidimensional Framework for Technical, Ethical, and Policy Solutions
Provisionally accepted
- 1 Shanxi Medical University, Taiyuan, China
- 2 Shanxi Provincial Cancer Hospital, Taiyuan, Shanxi Province, China
- 3 School of Humanities and Social Sciences; School of Management, Shanxi Medical University, Jinzhong, China
Purpose: This study systematically identifies and addresses the key factors contributing to misdiagnosis in AI-driven medical diagnostics. The main research question is how technical limitations, ethical concerns, and unclear accountability hinder the safe and equitable use of AI in real-world clinical practice, and what integrated solutions can minimize errors and promote trust.

Methods: We conducted a literature review and case analysis across major medical fields, evaluating failure modes such as data pathology, algorithmic bias, and flawed human-AI interaction. Based on these findings, we propose a multidimensional framework combining technical strategies, such as dynamic data auditing and explainability engines, with ethical and policy interventions, including federated learning for bias mitigation and blockchain-based accountability.

Results: Our analysis shows that misdiagnosis often results from data bias, lack of model transparency, and ambiguous responsibility. When applied to published case examples and comparative evaluations from the literature, elements of our framework are associated with improvements in diagnostic accuracy, transparency, and equity. Key recommendations include bias monitoring, real-time interpretability dashboards, and legal frameworks for shared accountability.

Conclusions: A coordinated, multidimensional approach is essential to reduce the risk of misdiagnosis in AI-supported diagnostics. By integrating robust technical controls, clear ethical guidelines, and defined accountability, our framework provides a practical roadmap for responsible, transparent, and equitable AI adoption in healthcare, improving patient safety, clinician trust, and health equity.
Keywords: Artificial Intelligence (AI) Diagnostics, Misdiagnosis Risk, AI Policy and Regulation, Patient Safety and Trust, Ethical Responsibility
Received: 16 Mar 2025; Accepted: 10 Oct 2025.
Copyright: © 2025 Li, Yi, Fu, Yang, Duan and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Yue Li, sxjkyxyly@163.com
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.