
ORIGINAL RESEARCH article

Front. Public Health

Sec. Digital Public Health

Medical Damage Liability Risk of Medical AI: From the Perspective of DeepSeek's Large-scale Deployment in Chinese Hospitals

Provisionally accepted
Ye Wang1*, Zishi Zhou2
  • 1Hunan University of Science and Technology, Xiangtan, China
  • 2Hunan University, Changsha, China

The final, formatted version of the article will be published soon.

Healthcare is one of the most important fields for the application of artificial intelligence (AI). This study introduces the current deployment of the AI model DeepSeek in Chinese hospitals, raises ethical and legal concerns about medical AI, and, by reviewing the current regulatory landscape for medical AI in China, identifies a problem of insufficient regulation. The discussion focuses on three types of medical damage liability risk posed by medical AI: medical product liability, diagnosis and treatment damage liability, and medical ethics liability. In determining medical product liability, the ethical attributes and technological characteristics of medical AI establish its auxiliary positioning, yet this auxiliary positioning does not eliminate the applicable scope of medical product liability; in judging product defects, a "rational algorithm" standard modeled on the "rational person" standard should be used to identify AI design defects. In determining diagnosis and treatment damage liability, medical AI has not changed the existing structure of the doctor-patient relationship, but the human-machine collaborative model of diagnosis and treatment makes it more difficult to identify a doctor's fault; a "reasonable doctor" standard should therefore be adopted, and doctors should be granted discretion when their negligence in relying on AI recommendations is evaluated. Where DeepSeek is deployed locally in hospitals, if misdiagnosis occurs, hospitals and doctors are more likely to bear diagnosis and treatment damage liability than medical product liability. At the same time, the adoption of DeepSeek exacerbates the inadequate protection of patients' right to informed consent, which may give rise to medical ethics liability. Finally, this article also discusses the data compliance risks of large-scale deployment of DeepSeek in hospitals.

Keywords: Medical AI, DeepSeek, Legal risk, Medical damage liability, Reasonable doctor standard

Received: 16 Oct 2025; Accepted: 11 Nov 2025.

Copyright: © 2025 Wang and Zhou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Ye Wang, yewang0507@126.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.