EDITORIAL article
Front. Health Serv.
Sec. Implementation Science
This article is part of the Research Topic: Place-based Evidence for Clinical Artificial Intelligence Implementation
Editorial: Place-based Evidence for Clinical Artificial Intelligence Implementation
Provisionally accepted
- 1University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- 2University of Birmingham, Birmingham, United Kingdom
- 3KPMG, London, United Kingdom
- 4University of Leicester, Leicester, United Kingdom
There has been sustained and broadly held optimism about the potential of clinical artificial intelligence (AI) to improve the quality, efficiency and reach of healthcare services. This optimism continues to be signalled by policymakers, manufacturers and researchers (1-3). Despite this general tide of optimism, examples of scaled adoption of specific clinical AI technologies, and of the forms of evidence traditionally considered most valuable in informing those adoption decisions, remain scarce (4). The value of such evidence is challenged by the context-sensitivity of clinical AI's value proposition and by the limited availability of skills and guidance to enable local stakeholders to make informed decisions. Further challenges arise from the complexity of the risks that accompany AI's potential benefits (5). Evidence that permits the evaluation of clinical risk is required, as is evidence highlighting legal, financial, operational and reputational risks. These factors all contribute to the persistent implementation gap around clinical AI and to the stubborn but vital challenge of evaluating these interventions in the sociotechnical context in which they are embedded and used (6).

This Research Topic presents a collection of articles which blur the margins of theory and practice to support decision makers as they evaluate clinical AI interventions for local implementation. The work presented does not seek to create directly the evidence that local decision makers require, accepting that the varied forms of evidence needed to demonstrate that the risks of an AI innovation are outweighed by its benefits have limited generalisability beyond the setting in which they were generated. Instead, these articles aim to share generalisable and pragmatic approaches to creating that evidence, ensuring place-based meaning.
The authors take frameworks from governance, research and industry disciplines and apply varied methodological approaches to produce insights and tools which are actionable for the individuals responsible for implementing AI in real-world healthcare services.

In our first article, Nair et al. report findings from secondary research and primary qualitative research exploring the activities typical of clinical AI implementation. They evaluate and refine an established theoretical framework to provide a roadmap of these activities for healthcare provider organisations. Their analysis highlights the range of individuals involved, the processes entailed and the importance of collective effort in bringing clinical AI into practice, while demonstrating the enduring relevance of insights from other forms of innovation captured in a theoretical framework.

This is followed by McDonald et al.'s systematic review, which presents a methodology in the form of a Target Product Profile (TPP) designed to facilitate collaboration on clinical AI innovation, not only within a healthcare provider organisation but also with developers of AI solutions from industry and elsewhere. This work identifies and consolidates TPPs from across digital health technologies to establish a practical framework that enables current and future adopters of clinical AI to signal their needs to developers and to evaluate potential technologies in a holistic and structured way.

Our final two contributions offer insights from policy research exploring the challenges and opportunities posed by existing practical frameworks for clinical AI. These frameworks address legal and governance aspects of healthcare innovation, which were explored through workshops with cross-sector and multidisciplinary participation. The article from Evans et al. examines procurement frameworks.
Recommendations are presented for practitioners in various roles across the healthcare system to unlock the opportunities these frameworks offer, addressing key barriers that hinder the scale and spread of clinical AI innovation. Gilbert et al.'s article shifts the focus from commercial governance to information governance. Drawing upon a combination of clinical, academic and industry perspectives, the authors reflect on the factors that influence the efficiency and effectiveness of the Data Protection Impact Assessment framework for AI research and practice in the UK, identifying both challenges and potential solutions. In so doing, their analysis highlights the need for training initiatives, communities of practice and the standardisation of governance processes and structures across NHS Trusts.

The above contributions offer diverse approaches to the varied implementation challenges local decision makers face with clinical AI. This encompasses i) methodological diversity (evidence synthesis, qualitative research, co-design), ii) domain diversity (operations, commercial, information governance, innovation) and iii) philosophical diversity (theoretical frameworks, practical frameworks). Collectively, the articles illustrate the interface between implementation research and practice, and the potential value of bridging the two. Insights from this topic and related work, both within and beyond the academic literature, help to mobilise knowledge from a broader empirical and theoretical base to address the challenge of implementing a clinical AI technology in a specific context. This challenge presents real problems: significant resources and goodwill are expended by actors designing and executing pilots in isolation, while their understanding of the requirements, and of how to evidence success or failure, remains limited.
Repeated and failed attempts to evidence these requirements locally pose a significant threat to the reputation of AI innovations and waste scarce resources (7). In turn, this limited understanding can lead to imprecise estimates of the costs of clinical AI implementation for adopter organisations. A recent Health Technology Assessment estimated the implementation costs for a specific AI intervention for fracture detection on radiographs to vary between £1,200 and £120,000 (8). This lack of precision threatens to undermine the viability of decision making under the budgetary constraints of a single department or organisation.

The insights presented here aim to help practitioners shift their focus from the novelties of clinical AI to the established knowledge and theory which underpin its successful implementation. This approach supports practitioners in anticipating and managing the challenges of clinical AI implementation, and in determining what kinds of evidence need to be generated and how. It does not discard the pursuit of generalisability, but focuses on the generalisability of methodologies for place-based evidence generation rather than of the evidence itself.

In summary, this collection highlights the importance of interdisciplinarity in safe, effective and efficient clinical AI implementation. The articles presented aim to provide practical tools and insights that enable stakeholders within adopting organisations to engage in collective sensemaking to facilitate successful implementation (9).
Keywords: frameworks, evidence-based medicine, implementation, artificial intelligence (AI), service evaluation
Received: 31 Oct 2025; Accepted: 03 Nov 2025.
Copyright: © 2025 Hogg, Gunatilleke and Maniatopoulos. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Henry David Jeffry Hogg, j.hogg.1@bham.ac.uk
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.