
PERSPECTIVE article

Front. Med.

Sec. Regulatory Science

Diverging Regulatory DNA in Adaptive Medical AI: US Agility and EU Accountability in Lifecycle Governance

Provisionally accepted
Jaehyun Lee1, Boram Choi2, Kwunho Jeong1, Sang Won Suh3, Hwanseok Rhee4, Ju Han Kim5*, Dae-Soon Son4*
  • 1JNPMEDI, Seoul, Republic of Korea
  • 2Onu Institute, Seoul, Republic of Korea
  • 3Hallym University College of Medicine, Chuncheon-si, Republic of Korea
  • 4Hallym University, Chuncheon-si, Republic of Korea
  • 5Seoul National University, Gwanak-gu, Republic of Korea

The final, formatted version of the article will be published soon.

Medical artificial intelligence (AI) is transitioning from static, rule-based systems to adaptive models capable of continuous learning and iterative refinement. Such adaptivity expands the utility and performance of clinical AI systems across diverse patient populations and real-world conditions. However, these properties challenge regulatory paradigms originally designed for fixed-function medical devices. Although the United States and the European Union share the goals of ensuring safety, accountability, and trustworthy performance, their regulatory architectures diverge because of differing legal-philosophical traditions. The United States employs a common-law, evidence-driven approach centered on the Total Product Life Cycle, using predetermined change-control mechanisms and real-world observational data to support iterative improvement under controlled risk. In contrast, the European Union adopts a civil-law, precautionary model operationalized through the Artificial Intelligence Act, the Medical Device Regulation, and the revised Product Liability Directive, emphasizing ex-ante duties, transparency, traceability, and accountability. Understanding these distinct regulatory DNAs is critical for aligning lifecycle governance of adaptive AI across jurisdictions and ensuring safe, context-responsive innovation.

Keywords: accountability, Adaptive AI, AI Act, Lifecycle Governance, MDR, PCCP, Regulatory DNA, RWE

Received: 02 Dec 2025; Accepted: 10 Feb 2026.

Copyright: © 2026 Lee, Choi, Jeong, Suh, Rhee, Kim and Son. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Ju Han Kim
Dae-Soon Son

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.