- Department of Computer Applications, Marian College Kuttikkanam Autonomous, Kuttikkanam, Kerala, India
Introduction: the pilot paradox
Across developing countries, artificial intelligence (AI) in healthcare has become the subject of remarkable optimism. From tuberculosis detection algorithms to AI-powered maternal health monitoring, new tools are constantly being tested in hospitals, primary health centers, and community health programs (1, 2). Governments, multilateral organizations, and donors increasingly support such experiments, seeing AI as a potential shortcut to address chronic shortages of specialists, infrastructure gaps, and inefficiencies in service delivery (3, 4).
However, despite this proliferation of pilots, very few AI initiatives advance to nationwide deployment or long-term institutional adoption. This mismatch—high pilot activity but low policy integration—has been described as the “pilotitis” syndrome of digital health (5). Instead of transforming health systems, many AI projects become isolated, donor-driven, and unsustainable. Hospitals may run a pilot for a year with external support, but once funding ends, servers go offline, trained staff move on, and policymakers shift priorities (6). The technology itself may be promising, but the enabling ecosystem remains weak.
This paradox reveals a systemic problem. While researchers and start-ups focus on demonstrating algorithmic accuracy and efficiency gains, governments face the harder question: How can AI be responsibly and sustainably embedded into complex, resource-constrained health systems? Answering this question requires moving beyond narrow evaluations of technical feasibility toward a broader analysis of governance, trust, policy frameworks, and system readiness (1, 5).
This article critically reflects on why AI interventions remain stuck at the pilot stage in developing countries, despite their apparent promise. Drawing from case experiences in Brazil, India, and selected sub-Saharan African countries such as South Africa, Kenya, and Rwanda, it analyzes recurring barriers that hinder scaling. It then proposes a five-pillar roadmap for transforming successful pilots into sustainable policy-backed programs. The argument is simple: scaling AI in healthcare is less about proving algorithms work, and more about building health systems that can work with algorithms.
Barriers to scaling AI in developing countries
The World Health Organization (WHO) identifies similar obstacles in its primary guidance on AI for health. The WHO Ethics and Governance of AI for Health report highlights challenges such as poor data quality, algorithmic bias, and unclear accountability, all of which make AI harder to scale than conventional digital health tools (7). The WHO Regulatory Considerations on AI for Health further stresses the need for documentation, transparency, and post-market monitoring—requirements that many pilots in developing countries fail to meet (8). WHO's recent guidance on large multimodal AI models adds that without explainability and local validation, AI systems risk unreliable performance in diverse populations (9). These primary sources reinforce that the scaling barriers seen in Brazil, India, and sub-Saharan African countries are part of a wider global pattern.
Although many of the structural constraints discussed in this paper are also relevant to digital health more broadly, AI introduces a distinct and more demanding layer of complexity. AI tools require large, high-quality datasets, reliable data pipelines, mechanisms for explainability, and continuous model updating—needs that far exceed those of routine telemedicine or electronic record systems. Challenges such as model drift, algorithmic bias, and unclear liability pathways make AI considerably more difficult to scale within low-resource health systems. Therefore, while digital health infrastructure forms the foundation, the barriers outlined below are amplified in the context of AI, which imposes stricter requirements for data governance, computational capacity, and institutional trust.
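To make this added burden concrete, the brief sketch below (an illustrative example only, not drawn from any of the pilots cited here) shows one common way a deployment team might watch for the model drift described above: comparing each feature's distribution in newly collected data against the training data with a two-sample Kolmogorov-Smirnov test. The feature names, values, and significance threshold are hypothetical.

```python
# Illustrative sketch: flag model drift by comparing each feature's live
# distribution against the training distribution (two-sample KS test).
# Feature names, data, and the alpha threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp


def detect_feature_drift(train_data: np.ndarray,
                         live_data: np.ndarray,
                         feature_names: list[str],
                         alpha: float = 0.01) -> dict[str, bool]:
    """Return, per feature, whether its live distribution differs from training."""
    drifted = {}
    for i, name in enumerate(feature_names):
        result = ks_2samp(train_data[:, i], live_data[:, i])
        drifted[name] = result.pvalue < alpha  # True = likely drift, review the model
    return drifted


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Hypothetical example: haemoglobin stays stable, but patient age shifts upward
    train = np.column_stack([rng.normal(13.5, 1.5, 5000), rng.normal(30, 8, 5000)])
    live = np.column_stack([rng.normal(13.5, 1.5, 1000), rng.normal(38, 8, 1000)])
    print(detect_feature_drift(train, live, ["haemoglobin_g_dl", "age_years"]))
```

Even a check this minimal presupposes a live data pipeline, access to versioned training data, and staff accountable for acting on the alerts; these are precisely the capabilities that routine telemedicine or record systems do not demand but AI does.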
• Fragmented Digital Ecosystems: A major challenge lies in the lack of interoperability across health information systems. For instance, Brazil's Unified Health System (SUS) historically operated with multiple government and private electronic health record (EHR) platforms that did not communicate with one another. Only recently did collaborative initiatives begin integrating municipal and federal data platforms. Without standardized data structures, AI tools cannot operate seamlessly across facilities, leading to duplication and inefficiency (10, 11). Similarly, in several sub-Saharan African countries—particularly Kenya, South Africa, and Rwanda—vertical health programs (HIV, TB, maternal health) each deploy their own digital tools. AI pilots often integrate with one vertical system but not with the broader national architecture, limiting scalability (12, 13).
• Dependence on Donor Funding: AI pilots in the Global South are frequently initiated and sustained by donor or NGO funding, which prioritizes innovation but rarely guarantees long-term financing. Once grants end, governments often lack budgetary space to absorb the costs of licensing, cloud infrastructure, or technical support. This creates a cycle where pilots demonstrate feasibility but collapse due to financial unsustainability (14). For example, AI-based diagnostic imaging tools piloted in countries such as Kenya and Rwanda showed strong results for TB detection, but when donor funding ended, hospitals struggled to pay for recurring software subscriptions and data storage (15–17).
• Weak Policy and Regulatory Frameworks: Scaling AI requires clear governance structures—who is accountable when AI makes a diagnostic error? How should liability be shared between developers, clinicians, and hospitals? In many developing countries, such questions remain unresolved. Regulatory frameworks for AI in healthcare are either nascent or absent, creating uncertainty for policymakers and reluctance among clinicians to adopt AI systems (18, 19). Data protection laws also vary widely. While countries like Brazil have robust frameworks (LGPD), many others lack clear guidelines on data storage, sharing, and cross-border flows, raising ethical and legal concerns (20).
• Low Levels of Trust and Acceptance: Trust is critical for adoption. Yet, clinicians often perceive AI as a threat to professional autonomy, fearing it may replace rather than augment their expertise. Patients, on the other hand, may distrust algorithms trained on foreign datasets that do not reflect local populations. Without transparency and explainability, AI tools risk rejection, even when technically accurate. Evidence from the UK's NHS RADICAL study showed that explainability and clinician confidence were key to adoption. These concerns are magnified in resource-limited contexts where misdiagnosis can have severe consequences (21, 22).
• Inadequate Infrastructure: Successful AI deployment requires stable internet, reliable electricity, robust servers, and secure data storage. Many rural hospitals in developing countries lack these basics. Without such infrastructure, even the most sophisticated AI system cannot function reliably. As recent analyses of AI for primary healthcare in rural settings note, hospitals often rush to adopt AI without addressing foundational gaps, leading to poor performance and wasted resources (23).
• Fragmented Political Commitments: Scaling is also constrained by federal governance structures, political turnover, and bureaucratic fragmentation. In countries where health responsibilities are shared across national and subnational governments, achieving consensus on digital strategies is challenging. For example, Brazil's data federation required political will across municipal and federal levels, without which integration would not have been possible (2, 14).
Lessons from case studies
• Brazil: Progress in Interoperability: Brazil's recent experience integrating primary and hospital data systems demonstrates that political commitment, technical architecture, and governance alignment are critical for scaling. By establishing data federations across municipal and federal levels, Brazil created a foundation for AI applications in predictive analytics and care coordination. However, the process was slow, resource-intensive, and dependent on continuous political support (24).
• India: Mixed Record in Digital Health: India's ambitious push for digital health, through initiatives like Ayushman Bharat Digital Mission (ABDM), has created infrastructure for interoperability (25–27). Yet, AI pilots in radiology and maternal health often remain fragmented, operating in silos. The scale challenge lies in aligning state-level health systems with federal frameworks and ensuring private sector innovations integrate with public health priorities (28, 29).
• Sub-Saharan Africa—The Pilot Graveyard: In countries such as South Africa, Kenya, and Rwanda, AI pilots for HIV, TB, and maternal health abound. Many demonstrate technical success but collapse post-pilot due to financing gaps and poor alignment with national strategies. A WHO report on tuberculosis CAD software highlights promise in autonomous AI for low-resource settings, yet long-term sustainability remains uncertain without integration into government budgets (30, 31).
A roadmap for scaling AI in developing countries
Overcoming “pilotitis” in developing countries requires a systemic approach that directly addresses the barriers identified earlier—fragmented ecosystems, donor dependence, weak governance, limited trust, infrastructural gaps, and political complexity. The first pillar, policy alignment, demands that AI initiatives be embedded within national digital health strategies rather than remaining isolated experiments. Many pilots fail because they are conceived outside government priorities, creating misalignment during scale-up. Brazil's early challenges with integrating municipal and federal systems illustrate this: AI-enabled analytics in primary care only progressed once policymakers aligned digital health goals across levels of government. Similarly, in India, several promising radiology AI tools remained stuck in state-level silos until alignment with the Ayushman Bharat Digital Mission clarified shared data and governance expectations. These experiences demonstrate that pilots must offer not only technical accuracy but also strategic relevance to health system goals, supported by regulatory clarity around risk, liability, and ethical oversight.
The second pillar, infrastructure and interoperability, tackles the technical bottlenecks that prevent wider deployment. Many AI pilots integrate only with vertical program databases—such as HIV or tuberculosis systems in sub-Saharan Africa—because national EHR platforms lack interoperable architecture. As a result, pilots cannot be extended to other departments or states. Investing in foundational digital infrastructure, standardized data formats such as HL7 FHIR, cybersecurity measures, and cloud storage is essential for ensuring that AI tools can operate reliably across hospitals and regions. Brazil's eventual success in linking its primary and hospital systems illustrates that interoperability is a prerequisite for scaling AI-driven predictive analytics and decision support. Without this foundation, pilots remain one-off prototypes, technically impressive but functionally trapped.
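To make the interoperability point concrete, the hypothetical snippet below expresses an AI-assisted screening result as an HL7 FHIR Observation and submits it to a national FHIR server. The endpoint URL, patient reference, and result text are placeholders, and a real deployment would add authentication, coded terminologies, and consent handling; this is a sketch of the standard's resource model, not any specific country's implementation.

```python
# Illustrative sketch only: an AI-assisted screening result exchanged as an
# HL7 FHIR R4 Observation. The server URL, patient reference, and result text
# are hypothetical; a real deployment would add authentication and consent.
import requests

FHIR_BASE = "https://fhir.example-moh.org/fhir"  # hypothetical national FHIR endpoint

observation = {
    "resourceType": "Observation",
    "status": "preliminary",  # AI output pending clinician confirmation
    "code": {"text": "Chest X-ray TB screening, AI-assisted (illustrative)"},
    "subject": {"reference": "Patient/example-123"},  # hypothetical patient ID
    "valueString": "Abnormal: suggestive of TB, refer for confirmatory testing",
}

response = requests.post(
    f"{FHIR_BASE}/Observation",
    json=observation,
    headers={"Content-Type": "application/fhir+json"},
    timeout=30,
)
response.raise_for_status()
print("Server assigned Observation id:", response.json().get("id"))
```

Because the payload follows the shared FHIR resource model rather than a vendor-specific schema, the same AI output can, in principle, be read by any facility or program connected to the national exchange, which is what allows a pilot to travel beyond the department where it was built.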
The third pillar, sustainable financing, directly responds to the donor-dependency barrier. Many AI pilots collapse when grants end because governments lack budget lines to cover recurring costs such as licensing, cloud hosting, server maintenance, and specialized staff. TB-focused AI diagnostic tools in East Africa provide a stark example: despite high accuracy, several pilots shut down once donor subsidies for subscriptions and data storage expired. To avoid this cycle, financing must shift from short-term innovation grants to hybrid models combining government budgets, public-private partnerships, and value-based financing mechanisms where efficiency gains fund sustainability. Open-source AI models can further reduce costs for low-income settings, creating a financially viable pathway from pilot to scale.
An additional dimension of sustainability in the Global South is the critical importance of locally led and government-financed AI initiatives. When national or subnational governments initiate and fund AI projects, the selected interventions tend to align more closely with local health priorities such as universal health coverage, maternal and child health, and the heavy burden of tropical and neglected diseases. In contrast, donor-driven pilots—often shaped by Global North agendas—may prioritize technologies that do not correspond to domestic health system needs or resource constraints. Who initiates and finances AI projects therefore has profound implications for which problems are addressed and whether solutions are designed for long-term integration into national health strategies. Strengthening local ownership and financing is thus essential for reducing dependency on short-term external grants and ensuring that AI solutions are contextually relevant and sustainable.
The fourth pillar, capacity and workforce development, acknowledges that technological readiness is meaningless without human readiness. Clinicians frequently express concerns about loss of autonomy or unfamiliarity with algorithmic systems, resulting in resistance even when tools are accurate. In India, university students and trainees have reported low awareness of digital health infrastructures, slowing adoption of newer AI tools. Similarly, early deployments of CAD-based TB screening in African contexts revealed that lack of training undermined clinical trust, despite strong technical performance. Investing in digital literacy, AI governance training, and communities of practice can normalize AI use in daily workflows. Medical curricula should integrate foundational knowledge of AI ethics, safety, and limitations to ensure future clinicians enter the health system prepared.
The fifth pillar, trust, transparency, and participation, addresses the cultural and ethical barriers that limit adoption. Clinicians remain wary of opaque “black-box” systems, and patients often distrust algorithms developed using foreign datasets that may not reflect local epidemiology. Studies on explainability in the UK's NHS illustrate that transparency significantly improves clinician confidence; these concerns are magnified in developing countries where misdiagnosis risks are high. Building trust thus requires explainable AI interfaces, clear communication of benefits and limitations, and involvement of both clinicians and patients in co-design processes. Transparency also ensures equity: pilots must demonstrate that algorithms trained on external datasets do not perpetuate bias or worsen disparities. Participation and explanation build the social legitimacy needed for scaling.
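One inexpensive way to operationalize such transparency, sketched below with entirely hypothetical data and feature names, is to report permutation importance computed on locally held test data, so that clinicians can see in clinical terms which inputs drive a model's predictions. This is an illustrative technique rather than a complete explainability or bias-auditing strategy.

```python
# Illustrative sketch with synthetic data: permutation importance as a simple,
# clinician-readable summary of which inputs drive a model's predictions.
# Feature names and the dataset are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "bmi", "systolic_bp", "haemoglobin", "prior_tb_contact"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is randomly shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for name, mean_imp, std_imp in sorted(
        zip(feature_names, result.importances_mean, result.importances_std),
        key=lambda item: item[1], reverse=True):
    print(f"{name:>18}: {mean_imp:.3f} +/- {std_imp:.3f}")
```

Reported alongside local validation results, such summaries give clinicians and review committees a shared, auditable artifact instead of a black box, and they can surface cases where a model leans on features that do not reflect local epidemiology.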
Together, these five pillars demonstrate that the failure of AI pilots to scale is not a technical failure but a systems failure. By linking policies, infrastructure, financing, workforce preparation, and trust-building efforts, governments can transform disconnected pilots into sustainable, nationally integrated AI programs. The examples from Brazil, India, and countries such as South Africa, Kenya, and Rwanda reinforce that scaling success depends on addressing structural constraints rather than showcasing isolated technical achievements. A brief snapshot of the proposed five-pillar roadmap, and its explicit linkage to system-level barriers, is presented in Table 1.
Table 1. Summary of the five-pillar policy roadmap for scaling AI in healthcare in developing countries.
Pillar | Barrier addressed | Illustrative actions
1. Policy alignment | Pilots conceived outside government priorities; unclear regulation and liability | Embed AI in national digital health strategies; clarify risk, liability, and ethical oversight
2. Infrastructure and interoperability | Fragmented digital ecosystems and vertical program silos | Invest in foundational infrastructure, standardized formats (HL7 FHIR), cybersecurity, and cloud storage
3. Sustainable financing | Donor dependence and post-grant collapse | Combine government budgets, public-private partnerships, value-based mechanisms, and open-source models; strengthen local ownership
4. Capacity and workforce development | Clinician resistance and limited digital literacy | Digital literacy and AI governance training; communities of practice; AI ethics and safety in medical curricula
5. Trust, transparency, and participation | Distrust of opaque systems and foreign-trained algorithms | Explainable interfaces; clear communication of benefits and limits; co-design with clinicians and patients; bias and equity checks
Conclusion: moving beyond pilotitis
The promise of AI in healthcare for developing countries is undeniable. Pilots have shown that AI can enhance diagnostics, streamline workflows, and extend scarce clinical capacity. Yet the persistence of the pilot graveyard demonstrates that technical success is not enough. Scaling requires governance, financing, infrastructure, and trust.
Moving from pilot to policy means shifting focus from what AI can do to how health systems can support it sustainably. Without this shift, developing countries risk accumulating a portfolio of impressive but isolated pilot projects, with little lasting impact on population health.
By embedding AI within national strategies, investing in infrastructure, securing financing, building workforce capacity, and ensuring transparency, policymakers can transform AI from a promising experiment into a durable pillar of health systems.
Ultimately, the challenge is not to prove whether AI works, but to create health systems where AI can work responsibly, equitably, and at scale. Only then can the technology fulfill its promise to reduce disparities and strengthen healthcare delivery in the Global South.
Author contributions
JJ: Writing – review & editing, Conceptualization, Writing – original draft.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript. Generative AI was used for language editing only. All conceptualizations, methods, analyses, and conclusions are the author(s)’ own.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Goktas P, Grzybowski A. Shaping the future of healthcare: ethical clinical challenges and pathways to trustworthy AI. J Clin Med. (2025) 14(5):1605. doi: 10.3390/jcm14051605
2. Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implement Sci. (2024) 19(1):27. doi: 10.1186/s13012-024-01357-9
3. Zahoor I, Ihtsham S, Ramzan MU, Raza AA, Ali B. AI-Driven Healthcare delivery in Pakistan: a framework for systemic improvement. In: Proceedings of the 7th ACM SIGCAS/SIGCHI Conference on Computing and Sustainable Societies; New Delhi India: ACM (2024). p. 30–7. doi: 10.1145/3674829.3675058
4. Fuentes C, Rodríguez I, Cajamarca G, Cabrera-Quiros L, Lucero A, Herskovic V, et al. Opportunities and challenges of emerging human-AI interactions to support healthcare in the global south. In: Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing; San Jose Costa Rica: ACM (2024). p. 724–7. doi: 10.1145/3678884.3681834
5. Scarbrough H, Sanfilippo KRM, Ziemann A, Stavropoulou C. Mobilizing pilot-based evidence for the spread and sustainability of innovations in healthcare: the role of innovation intermediaries. Soc Sci Med. (2024) 340:116394. doi: 10.1016/j.socscimed.2023.116394
6. Kumar R, Singh A, Kassar ASA, Humaida MI, Joshi S, Sharma M. Adoption challenges to artificial intelligence literacy in public healthcare: an evidence based study in Saudi Arabia. Front Public Health. (2025) 13:1558772. doi: 10.3389/fpubh.2025.1558772
7. World Health Organization. Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. Available online at: https://www.who.int/publications/i/item/9789240084759 (Accessed November 6, 2025).
8. World Health Organization. Regulatory considerations on artificial intelligence for health. Available online at: https://www.who.int/publications/i/item/9789240078871 (Accessed November 6, 2025).
9. World Health Organization. Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. Available online at: https://www.who.int/publications/i/item/9789240084759 (Accessed November 6, 2025).
10. Mwogosi A, Mambile C. AI Integration in EHR systems in developing countries: a systematic literature review using the TCCM framework. Inf Discov Deliv. (2025). doi: 10.1108/IDD-07-2024-0097
11. Rathore N, Kumari A, Patel M, Chudasama A, Bhalani D, Tanwar S, et al. Synergy of AI and blockchain to secure electronic healthcare records. Secur Priv. (2025) 8(1):e463. doi: 10.1002/spy2.463
12. Mullankandy S, Mukherjee S, Ingole BS. Applications of AI in electronic health records, challenges, and mitigation strategies. In: 2024 International Conference on Computer and Applications (ICCA); Cairo, Egypt: IEEE (2024). p. 1–7. Available online at: https://ieeexplore.ieee.org/document/10927863/ (Accessed September 3, 2025).
13. Kumar M, Mostafa J. Research evidence on strategies enabling integration of electronic health records in the health care systems of low- and middle-income countries: a literature review. Int J Health Plann Manage. (2019) 34(2):e1016–25. doi: 10.1002/hpm.2754
14. Borines VA, Turi AN, Hedru P. The what, so what, and what now of the AI landscape in emerging economies. In: TENCON 2024 - 2024 IEEE Region 10 Conference (TENCON); Singapore, Singapore: IEEE (2024). p. 804–9. Available online at: https://ieeexplore.ieee.org/document/10902844/ (Accessed September 3, 2025).
15. Onno J, Khan FA, Daftary A, David PM. Artificial intelligence-based computer aided detection (AI-CAD) in the fight against tuberculosis: effects of moving health technologies in global health. Soc Sci Med. (2023) 327:115949. doi: 10.1016/j.socscimed.2023.115949
16. Clark RA, McQuaid CF, Richards AS, Bakker R, Sumner T, Prŷs-Jones T, et al. The potential impact of reductions in international donor funding on tuberculosis in low- and middle-income countries. medRxiv [Preprint]. (2025). doi: 10.1101/2025.04.23.25326313
17. Mandal S, Nair S, Sahu S, Ditiu L, Pretorius C. A deadly equation: the global toll of US TB funding cuts. medRxiv. (2025) 5. doi: 10.1101/2025.03.04.25323340
18. Bottomley D, Thaldar D. Liability for harm caused by AI in healthcare: an overview of the core legal concepts. Front Pharmacol. (2023) 14:1297353. doi: 10.3389/fphar.2023.1297353
19. Naidoo T. Overview of AI regulation in healthcare: a comparative study of the EU and South Africa. South Afr J Bioeth Law. (2024) 17:e2294. doi: 10.7196/SAJBL.2024.v17i3.2294
20. Kalodanis K, Feretzakis G, Rizomiliotis P, Verykios VS, Papapavlou C, Koutsikos I, et al. Data governance in healthcare AI: navigating the EU AI act’s requirements. In: Mantas J, Hasman A, Zoulias E, Karitis K, Gallos P, Diomidous M, et al., editors. Studies in Health Technology and Informatics. Amsterdam: IOS Press (2025). p. 66–70.
21. Eke CI, Shuib L. The role of explainability and transparency in fostering trust in AI healthcare systems: a systematic literature review, open issues and potential solutions. Neural Comput Appl. (2025) 37(4):1999–2034. doi: 10.1007/s00521-024-10868-x
22. Stevens AF, Stetson P. Theory of trust and acceptance of artificial intelligence technology (TrAAIT): an instrument to assess clinician trust and acceptance of artificial intelligence. J Biomed Inform. (2023) 148:104550. doi: 10.1016/j.jbi.2023.104550
23. Lamem MFH, Sahid MI, Ahmed A. Artificial intelligence for access to primary healthcare in rural settings. J Med Surg Public Health. (2025) 5:100173. doi: 10.1016/j.glmedi.2024.100173
24. MK MV, Sastry NKB, Moonesar IA, Rao A. Predicting universal healthcare through health financial management for sustainable development in BRICS, GCC, and AUKUS economic blocks. Front Artif Intell. (2022) 5:887225. doi: 10.3389/frai.2022.887225
25. Narayan A, Bhushan I, Schulman K. India’s evolving digital health strategy. NPJ Digit Med. (2024) 7(1):284. doi: 10.1038/s41746-024-01279-2
26. Sharma RS, Rohatgi A, Jain S, Singh D. The Ayushman Bharat digital mission (ABDM): making of India’s digital health story. CSI Trans ICT. (2023) 11(1):3–9. doi: 10.1007/s40012-023-00375-0
27. Kamath R, Banu M, Shet N, Jayapriya VR, Lakshmi Ramesh V, Jahangir S, et al. Awareness of and challenges in utilizing the Ayushman Bharat digital mission for healthcare delivery: qualitative insights from university students in coastal Karnataka in India. Healthcare. (2025) 13(4):382. doi: 10.3390/healthcare13040382
28. Sgaier SK, Ramakrishnan A, Wadhwani A, Bhalla A, Menon H, Baer J, et al. Achieving scale rapidly in public health: applying business management principles to scale up an HIV prevention program in India. Healthcare. (2018) 6(3):210–7. doi: 10.1016/j.hjdsi.2017.09.002
29. Mishra US, Yadav S, Joe W. The Ayushman Bharat digital mission of India: an assessment. Health Syst Reform. (2024) 10(2):2392290. doi: 10.1080/23288604.2024.2392290
30. Tran Ngoc C, Bigirimana N, Muneene D, Bataringaya JE, Barango P, Eskandar H, et al. Conclusions of the digital health hub of the transform Africa summit (2018): strong government leadership and public-private-partnerships are key prerequisites for sustainable scale up of digital health in Africa. BMC Proc. (2018) 12(S11):17. doi: 10.1186/s12919-018-0156-3
Keywords: artificial intelligence in healthcare, developing countries, pilotitis, health policy, digital health systems, scalability
Citation: Joseph J (2026) From pilot to policy: why AI health interventions fail to scale in developing countries. Front. Digit. Health 8:1699005. doi: 10.3389/fdgth.2026.1699005
Received: 4 September 2025; Revised: 6 November 2025; Accepted: 13 January 2026; Published: 29 January 2026.
Edited by:
Hannah Van Kolfschooten, University of Basel, Switzerland
Reviewed by:
Pramiti Parwani, University of Amsterdam, Netherlands
Copyright: © 2026 Joseph. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jeena Joseph, jeenajoseph005@gmail.com