ORIGINAL RESEARCH article
Front. Public Health
Sec. Digital Public Health
This article is part of the Research Topic: Ethical Challenges of AI
Ethical Challenges in Scene Understanding for Public Health AI
Provisionally accepted
Northern Theater Command Postgraduate Training Base of Jinzhou Medical University General Hospital, Shenyang, China
Integrating AI into public health raises complex ethical challenges, especially in scene understanding, where automated decisions affect socially sensitive contexts. In settings such as disease surveillance, patient monitoring, and behavioral analysis, the interpretability, fairness, and accountability of AI systems are crucial. Conventional approaches to ethical modeling in AI often impose normative concerns as external constraints, yielding post hoc evaluations that fail to resolve ethical tensions in real time. These deficiencies are especially problematic in public health applications, where decision-making must safeguard privacy, foster social trust, and accommodate diverse moral frameworks. To address these limitations, this study introduces a methodological framework that integrates ethical reasoning into the learning architecture itself. The proposed model, VirtuNet, incorporates deontic constraints and stakeholder preferences within its computational pathways, embedding ethical admissibility into both representation and decision processes. In addition, a dynamic conflict-resolution mechanism, the Reflective Equilibrium Strategy, adapts policy behavior in response to evolving ethical considerations, enabling principled moral deliberation under uncertainty. This dual structure, combining embedded normative templates with adaptive strategic mechanisms, aligns AI behavior with public health values such as transparency, accountability, and privacy preservation. Experimental evaluations show that the framework achieves stronger ethical alignment, fewer norm violations, and greater adaptability than traditional constraint-based systems.
By bridging formal ethics, machine learning, and public interest imperatives, this work establishes a foundation for deploying ethically resilient AI in public health scenarios demanding trust, legality, and respect for human dignity.
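To make the abstract's core idea concrete, the sketch below illustrates one plausible reading of "deontic constraints as a hard admissibility filter, stakeholder preferences as a soft ranking." This is not VirtuNet itself (its internals are not given here); all names (`DeonticConstraint`, `admissible_actions`, `select_action`, the privacy norm, and the preference weights) are hypothetical, and the fallback to human escalation is only one possible conflict-resolution behavior.

```python
# Hypothetical sketch of constraint-filtered, preference-ranked action selection.
# Not the paper's actual VirtuNet implementation; all names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DeonticConstraint:
    """A hard norm: any action for which `forbids` returns True is inadmissible."""
    name: str
    forbids: Callable[[str, Dict], bool]

def admissible_actions(actions: List[str], context: Dict,
                       constraints: List[DeonticConstraint]) -> List[str]:
    """Keep only actions that violate no deontic constraint (hard filter)."""
    return [a for a in actions
            if not any(c.forbids(a, context) for c in constraints)]

def select_action(actions: List[str], context: Dict,
                  constraints: List[DeonticConstraint],
                  preferences: Dict[str, float]) -> str:
    """Pick the admissible action with the highest stakeholder-weighted score."""
    candidates = admissible_actions(actions, context, constraints)
    if not candidates:
        # Illustrative fallback when every action conflicts with some norm:
        # escalate to a human decision-maker rather than act.
        return "defer_to_human"
    return max(candidates, key=lambda a: preferences.get(a, 0.0))

# Example: a privacy norm forbids sharing identifiable surveillance footage.
no_reidentify = DeonticConstraint(
    name="no_reidentification",
    forbids=lambda a, ctx: a == "share_raw_footage" and ctx.get("contains_faces", False),
)
prefs = {"share_raw_footage": 0.9, "share_anonymized_summary": 0.7, "withhold": 0.1}
choice = select_action(
    ["share_raw_footage", "share_anonymized_summary", "withhold"],
    {"contains_faces": True},
    [no_reidentify],
    prefs,
)
print(choice)  # the highest-preference action that passes the privacy norm
```

In this toy run, the highest-preference action (`share_raw_footage`) is excluded by the privacy constraint, so the system falls back to the best admissible alternative, mirroring the abstract's claim that admissibility is enforced inside decision-making rather than checked post hoc.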
Keywords: ethical reasoning, public health AI, scene understanding, deontic constraints and stakeholder preferences, reflective equilibrium strategy
Received: 28 Oct 2025; Accepted: 03 Nov 2025.
Copyright: © 2025 Zhao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Zihan Zhao, oxbc93436@outlook.com
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.