ORIGINAL RESEARCH article
Front. Digit. Health
Sec. Ethical Digital Health
Volume 7 - 2025 | doi: 10.3389/fdgth.2025.1625747
This article is part of the Research Topic: Ethical and Legal Implications of Artificial Intelligence in Public Health: Balancing Innovation and Privacy
Cautious Optimism: Public Voices on Medical AI and Sociotechnical Harm
Provisionally accepted
- 1 University of York, York, United Kingdom
- 2 Microsoft Research, Cambridge, United Kingdom
Background: Medical-purpose software and Artificial Intelligence ('AI')-enabled technologies ('medical AI') raise important social, ethical, cultural, and regulatory challenges. To elucidate these challenges, we present the findings of a qualitative study undertaken to elicit public perspectives and expectations around medical AI adoption and related sociotechnical harm. Sociotechnical harm refers to any adverse implication, including, but not limited to, physical, psychological, social, and cultural impacts, experienced by a person or broader society as a result of medical AI adoption. The work is intended to guide effective policy interventions that address, prioritise, and mitigate such harm.
Methods: Using a qualitative design, twenty interviews and/or long-form questionnaires were completed between September and November 2024 with UK participants to explore their perspectives, expectations, and concerns around medical AI adoption and related sociotechnical harm. An emphasis was placed on diversity and inclusion, with participants drawn from racially, ethnically, and linguistically diverse groups and from self-identified minority groups. A thematic analysis of interview transcripts and questionnaire responses was conducted to identify general perceptions of medical AI and of sociotechnical harm.
Results: Our findings demonstrate that while participants are cautiously optimistic about medical AI adoption, all participants expressed concern about matters related to sociotechnical harm. These included potential harm to human autonomy; alienation and a reduction in standards of care; a lack of value alignment and integration; epistemic injustice; bias and discrimination; and issues around access and equity, explainability and transparency, and data privacy and data-related harm. While responsibility was seen to be shared, participants located responsibility for addressing sociotechnical harm primarily with regulatory authorities. A particular concern was the risk of exclusion and inequitable access owing to practical barriers such as physical limitations, technical competency, language barriers, and financial constraints.
Conclusion: We conclude that medical AI adoption can be better supported by identifying, prioritising, and addressing sociotechnical harm, including through the development of clear impact-assessment and mitigation practices, the embedding of pro-social values within such systems, and effective policy guidance.
Keywords: public perspectives, medical devices regulation, AI-enabled medical devices, healthcare, medical AI, sociotechnical harm, socio-ethical and cultural requirements, prosocial values
Received: 09 May 2025; Accepted: 10 Sep 2025.
Copyright: © 2025 Townsend, Hodge, Richardson, Calinescu and Arvind. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Beverley Alice Townsend, University of York, York, United Kingdom
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.