
PERSPECTIVE article

Front. Public Health, 24 September 2025

Sec. Digital Public Health

Volume 13 - 2025 | https://doi.org/10.3389/fpubh.2025.1680630

This article is part of the Research Topic: Digital Health Misinformation.

A surge of AI-driven publications: the impact on health professionals and potential mitigating solutions

Guglielmo Arzilli1, Elisa Di Maggio2*, Luigi De Angelis1, Francesco Baglivo1, Elena Savoia3,4, Gaetano Pierpaolo Privitera1 and Caterina Rizzo1
  • 1Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
  • 2Hygiene Unit, Policlinico Foggia Hospital, Department of Medical and Surgical Sciences, University of Foggia, Foggia, Italy
  • 3Emergency Preparedness Research Evaluation and Practice Program (EPREP), Division of Policy Translation and Leadership Development, Harvard T.H. Chan School of Public Health, Boston, MA, United States
  • 4Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, United States

The rapid development of generative AI is reshaping scientific communication, particularly in medicine and public health. Since the release of ChatGPT in 2022, Large Language Models have become widely accessible, supporting manuscript editing, statistical analysis, and rapid evidence synthesis. However, this surge in AI-generated content raises concerns about the quality, reliability, and ethical implications of scientific publishing. Increased reliance on AI-driven authoring tools could exacerbate an “infodemic”—an overwhelming flood of potentially unreliable or misleading information. This risk is compounded by the prevailing “publish or perish” culture, which prioritizes publication volume over meaningful contributions. In addition, the proliferation of academic journals, especially those that charge high publication fees, deepens inequalities in global health research and limits access for low-income countries. Documented cases of fabricated articles and false authorship in predatory journals highlight how AI can be misused, threatening evidence-based medicine and influencing healthcare decisions. To address these challenges, regulatory frameworks, ethical guidelines, and widespread digital literacy training for researchers and health professionals are critical. A balanced approach—harnessing the efficiency of AI while safeguarding scientific integrity—is needed to prevent an AI-driven infodemic and ensure the equitable, high-quality dissemination of medical knowledge.

Introduction

Ioannidis et al. (1) observed that thousands of scientists had a publication rate of approximately one paper every 5 days. With the advent of generative artificial intelligence (generative AI), this rate is likely to increase (2). In a world astonished by the potential of ChatGPT (a chatbot powered by GPT, the Generative Pre-trained Transformer), released by OpenAI in late 2022 (3), it seems effortless to generate high-quality written text using focused prompts. Since its launch, this innovative and user-friendly tool has rapidly reached a large portion of the public, and Large Language Model (LLM) applications, previously confined to a limited number of specialists, are now accessible to everyone.

In medical scientific writing, generative AI has shown its potential. The ease with which it can help produce and improve an article is astonishing (4). The applications are many: from writing programming code and supporting statistical analysis, to editing manuscripts (5), to serving as a valuable tool for generating evidence syntheses and meta-analyses, which, with the integration of AI technologies, can be completed in a matter of days instead of months (6). Such capabilities promise to enhance research capacity and improve the synthesis of applicable information across professional fields. A minimal sketch of the manuscript-editing use case follows below.
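
As an illustration of this use case, the sketch below passes a draft sentence to an LLM for copy-editing via the OpenAI Python client. The model name, prompt wording, and sample draft are illustrative assumptions, not tools or materials described in the cited studies.

```python
# Minimal sketch of LLM-assisted manuscript editing (illustrative only).
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

draft = (
    "The results shows that vaccination uptake were lower in rural aereas, "
    "sugesting access barriers."  # deliberately flawed sample text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a scientific copy editor. Correct grammar and "
                "spelling without changing the scientific meaning."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)  # the corrected sentence
```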

As we enhance the ability and speed by which we can synthesize and write about research, an important question arises: will this affect the number and quality of publications and the ability to utilize them effectively?

This paper explores the implications of the increasing use of generative AI in medical writing, focusing on its effects on health professionals and the potential for an AI-driven infodemic. By addressing existing gaps in the literature, it provides a critical analysis of how these technological developments challenge the quality, equity, and reliability of medical knowledge production, while proposing solutions to mitigate potential risks.

Current status of literature production

In recent years, there has been a significant increase in the number of scientific articles submitted to academic journals. Clark (7) pointed out that journal submissions increased by more than 60%, with nearly a quarter of a million submissions in the first wave of the COVID-19 pandemic. For example, BMJ journal submissions rose by almost 20% in 2020 compared to the previous year. While a large-scale emergency, such as a pandemic, is expected to increase both the demand for and the production of scientific writing, this trend cannot be attributed to the pandemic effect alone. As Esther Landhuis (8) noted, the production of scientific papers increased by 8–9% annually over the past few decades (a rate with substantial compounding effects, as the sketch below shows). The growing need to accommodate this large number of publications has led to the creation of new academic journals, often charging publication fees. The number of scholarly journals has risen significantly, from just under 35,000 in 2010 to over 46,000 in 2020 (9), including an increase in the number of medical journals adopting an open-access policy (10). As a consequence, this may overload the peer review process and represents a challenge for researchers who are competent in a specific field but lack the time to complete reviews (10–12). Under typical circumstances, it can take several months to complete a peer review process (13), but given the growing number of submissions, reviewers may improperly use generative AI to perform peer review on their behalf (14). This practice may undermine the quality of the scientific publication process and introduce errors into the published literature, thus contributing to the mechanisms of an AI-driven infodemic of scientific papers filled with unchecked and unreliable information. In addition, non-peer-reviewed knowledge dissemination channels, such as pre-print portals, have gained ample space, further impacting the information ecosystem. When amplified by social media and digital platforms, this overflow of preliminary or low-quality evidence may contribute to the spread of health misinformation, with potential downstream effects on clinical practice and population health (15). Although researchers argue that even non-peer-reviewed articles may include important findings and influence scientific progress in a positive manner (16), it is reasonable to assume that practitioners may feel overwhelmed when deciding which source to trust when dealing with such an infodemic of scientific production.
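
To put these growth figures in perspective, the back-of-the-envelope calculation below works from the rates cited above; it is a sketch, not the sources' own computation.

```python
# Back-of-the-envelope check of the growth figures cited above.
import math

annual_growth = 0.085  # midpoint of the 8-9% annual increase in papers (8)
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(f"Paper output doubles roughly every {doubling_time:.1f} years")  # ~8.5

journals_2010, journals_2020 = 35_000, 46_000  # journal counts cited in (9)
cagr = (journals_2020 / journals_2010) ** (1 / 10) - 1
print(f"Implied journal growth: {cagr:.1%} per year")  # ~2.8% per year
```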

The concept of infodemic has been widely described in the scientific literature (17) as “the rapid spread of large amounts of sometimes conflicting or inaccurate information that can impede the ability of individuals, communities, and authorities to protect health and effectively respond in a crisis” (18). This phenomenon inevitably affects the quality of the information produced and the extent to which it can impact practitioners’ and policymakers’ decision-making, owing to a mix of reliable and unreliable information that can be further expanded by generative AI, a phenomenon named the AI-driven infodemic (19).

The publish or perish culture

This surge of publications is also linked to the mounting pressure to produce and disseminate research findings at any cost. This pressure, often referred to as the “publish or perish” culture, underscores the imperative for academic researchers to meet bibliometric benchmarks, secure research funding, and attain more prestigious academic positions (20). Evidence suggests that such pressure can shift the focus from producing high-quality, impactful research to maximizing publication counts, sometimes leading to fragmented or redundant studies, selective reporting, and other questionable research practices (21, 22). Generative AI can amplify this trend by accelerating the production of manuscripts—whether based on primary or secondary data or commentaries—driven more by the need to publish than by genuine improvements in methodological rigor or the intent to fill a real gap in knowledge (2). This may further increase low-quality publications, complicating the identification of reliable evidence. In such a context, LLMs act as catalysts of the “publish or perish” culture, serving as tools that feed researchers’ need for visibility.

The risk of healthcare decisions driven by unreliable scientific production

The vast amount of content LLMs can generate might compromise the peer review system and the validity of scientific outputs. On this premise, inaccurate content could be inserted by mistake or intentionally for fraudulent purposes. A notable example is a proof-of-concept study showing the capacity of ChatGPT to generate fabricated but highly convincing medical articles, complete with references and data, which could easily have deceived readers and even expert reviewers (23). Another striking case occurred when an AI-generated article was published in a predatory journal under the false authorship of a well-known academic, complete with a valid DOI, despite the fact that the researcher had never submitted or written the work (24). Such incidents highlight how AI can be misused to create fraudulent outputs, which, once disseminated, may acquire unwarranted credibility and influence. For example, unreliable information could be used to favor one pharmaceutical product over another, steering scientific debate or prescription trends. In the era of LLMs, the speed and scale at which incorrect claims about a pharmaceutical product or medical procedure can be generated and disseminated—while potentially being perceived as credible—greatly increase the risk of misguided healthcare decisions and the spread of mis-disinformation, potentially amplified by an AI-fueled and unverified infodemic (19). Recent evidence shows that digital misinformation during the pandemic directly influenced harmful behaviors, ranging from the inappropriate use of hydroxychloroquine to reduced vaccine uptake, demonstrating the concrete impact of health misinformation on population-level outcomes (25, 26).

Who would be most impacted?

As with all types of threats, there are vulnerable groups that will be most at risk from the effects of an infodemic. In low-income countries, health professionals may lack the resources to manage an infodemic of scientific literature (27). For example, they may find it more difficult to publish in higher-quality journals, increasing their exposure to less reputable or predatory journals (28, 29). Furthermore, the shift toward open-access publishing has also created inequalities for authors in economically disadvantaged areas, who are often unable to afford article-processing charges (APCs) (30). Private initiatives such as Research4Life aim to address this issue by providing researchers in low-income countries with free access to paid journals (31). However, it is questionable whether these programs can fill the inequality gap in a context lacking medical-scientific training and the ability to critically appraise the scientific literature (32). The advent of AI-driven infodemics threatens to widen the gap between health professionals with varying degrees of susceptibility to this overload of information, exacerbating the already existing disparity in clinical and research skills across countries and undermining equity in producing and using scientific findings. Another concern is the potential interference of generative AI in knowledge reproduction, including academic and non-academic teaching, which ultimately affects trainees’ critical thinking and independent problem-solving skills. The use of AI for content generation could significantly impact professionals and trainees, as an optimally packaged scientific output might discourage further analysis and investigation of the subject matter being researched and studied. One foreseeable consequence is deskilling, leaving professionals unable to navigate the publication process independently. This could undermine healthcare quality and evidence-based medicine in an already weak professional environment (33), as the inability to critically appraise and produce scientific evidence may lead to the uncritical adoption of outdated, biased, or low-quality sources, ultimately resulting in inappropriate clinical decisions and reduced patient safety.

The future is now: it is time to act!

To date, numerous attempts have been made to replicate original articles using non-human authors. Many readers and scientists have already recognized typical AI writing patterns in early reports (34). At the same time, AI-generated articles and images have been retracted (35). As a result, several policies are emerging to regulate the use of AI in academia, with publishing groups requiring disclosure of AI use and full author accountability. However, the applicability of, and compliance with, these policies remain unknown. Despite the development of detectors that recognize AI-generated language (36), there are no reliable standards for accurately detecting AI-written text (see the sketch below for one common heuristic and its limits). Furthermore, whether a tool can detect text produced by current LLMs, let alone more sophisticated future versions, is still debated. This situation underscores the need for clear guidance in navigating the vast and complex “mare magnum” of information.
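
To illustrate why detection is brittle, the sketch below implements one common heuristic: scoring text perplexity under a small reference language model (GPT-2, via the Hugging Face transformers library). Reading low perplexity as a sign of machine-generated text is an assumption of this heuristic, not an established standard, and paraphrasing or newer models easily defeat it.

```python
# Sketch of a perplexity-based AI-text detection heuristic (illustrative).
# Assumes `torch` and `transformers` are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss per token.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Lower perplexity is often read as a hint of machine-generated text,
# but the signal is unreliable, as discussed above.
print(perplexity("The patient was administered 5 mg of the study drug."))
```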

In the context of this AI-driven infodemic, researchers and practitioners must identify reliable sources of information to ensure the high quality of their work. As demonstrated by the use of hydroxychloroquine during the COVID-19 pandemic, non-peer-reviewed or low-quality literature may lead to the utilization of inappropriate therapies (37). Given this premise, it is evident that an increase in the number of publications can lead to an imbalance between the supply of information and the practitioners’ ability to extract useful content (38). This phenomenon raises the question of how to select which articles to use. One potential solution is to rely on systematic reviews and meta-analyses, which consolidate evidence from multiple sources into a single article. However, even these types of studies may be subject to the same pitfalls in an infodemic context. Various reviews may exist on the same topic, with subtle differences between them. In addition, reviews may be inconclusive due to the poor quality of the articles included (39).

Conclusion

There is a balance to be struck between using AI to rapidly produce scientific literature and preserving the ability of clinicians, public health practitioners, and policymakers to select, critically appraise, and utilize that literature. From this perspective, several strategies may be considered. Regulating the use of AI in scientific writing would help standardize its use and promote good practices on a common front, providing standards of action that improve its results and allow its full potential to be exploited. However, regulation alone cannot be the only way to manage such a disruptive phenomenon. This is why it may be useful to adopt a more holistic approach, recalibrating training methods in a global sense. This recalibration should encompass not only the technical aspects of information quality but also the ethical considerations that come with the production of information and how health professionals and researchers are trained. The ethical use of AI in scientific literature can be ensured only when health professionals possess the skills to critically assess both its potential risks and its substantial benefits. While numerous educational programs and scholarly discussions on the ethics and applications of AI in healthcare are already available (40), a considerable proportion of professionals remain outside the reach of such training initiatives. Therefore, educational opportunities should be designed to reach all professionals, regardless of seniority or institutional affiliation, and should be made freely accessible. Ensuring open-access, cost-free training is crucial to avoid inequalities in digital literacy and to guarantee widespread adoption of ethical and responsible practices. The absence of adequate digital literacy and ethical guidance among these individuals increases the likelihood of inappropriate or harmful use of AI tools. In parallel, academic institutions are encouraged to promote transparent ethical guidelines on the acceptable use of generative AI in manuscript drafting, ensuring that disclosure of AI support becomes a standardized practice. Journals should establish clear editorial policies to verify compliance and provide examples of best practices. At the same time, researchers must be trained to integrate AI tools responsibly, using them as support rather than as a substitute for scientific reasoning and authorship accountability. Finally, there is a pressing need for tools that assess the quality of a publication, the data it contains, the reproducibility of the research itself, and its impact on clinical practice or policymaking, supporting quality, soundness, and ethical principles rather than mere productivity.

Author contributions

GA: Conceptualization, Writing – original draft, Writing – review & editing, Validation, Visualization. EM: Conceptualization, Writing – original draft, Writing – review & editing, Validation, Visualization. LA: Conceptualization, Writing – original draft, Writing – review & editing. FB: Conceptualization, Writing – original draft, Writing – review & editing. ES: Funding acquisition, Supervision, Validation, Writing – original draft, Writing – review & editing. GP: Supervision, Validation, Writing – original draft, Writing – review & editing. CR: Supervision, Validation, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This manuscript was partially supported by the project entitled “Reducing populations’ vulnerabilities to mis-disinformation related to scientific content,” award #G598 to Harvard University from the NATO Science for Peace and Security Program.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that no Gen AI was used in the creation of this manuscript.


Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Ioannidis, JPA, Klavans, R, and Boyack, KW. Thousands of scientists publish a paper every five days. Nature. (2018) 561:167–9. doi: 10.1038/d41586-018-06185-8

2. Conroy, G. How ChatGPT and other AI tools could disrupt scientific publishing. Nature. (2023) 622:234–6. doi: 10.1038/d41586-023-03144-w

3. ChatGPT. (2024). Available online at: https://chat.openai.com/ (Accessed 11 April 2024).

4. Khalifa, M, and Albadawy, M. Using artificial intelligence in academic writing and research: an essential productivity tool. Comput Methods Programs Biomed Update. (2024) 5:100145. doi: 10.1016/j.cmpbup.2024.100145

5. Del Giglio, A, and da Costa, MUP. The use of artificial intelligence to improve the scientific writing of non-native English speakers. Rev Assoc Med Bras. (2023) 69:e20230560. doi: 10.1590/1806-9282.20230560

6. Giunti, G, and Doherty, CP. Cocreating an automated mHealth apps systematic review process with generative AI: design science research approach. JMIR Med Educ. (2024) 10:e48949. doi: 10.2196/48949

7. Clark, J. How covid-19 bolstered an already perverse publishing system. BMJ. (2023) 380:689. doi: 10.1136/bmj.p689

8. Landhuis, E. Scientific literature: information overload. Nature. (2016) 535:457–8. doi: 10.1038/nj7612-457a

9. Wordsrated. Number of academic papers published per year. (2024). Available online at: https://wordsrated.com/number-of-academic-papers-published-per-year/ (Accessed 31 March 2024).

10. Booth, CM, Ross, JS, and Detsky, AS. The changing medical publishing industry: economics, expansion, and equity. J Gen Intern Med. (2023) 38:3242–6. doi: 10.1007/s11606-023-08307-z

11. Künzli, N, Berger, A, Czabanowska, K, Lucas, R, Madarasova Geckova, A, Mantwill, S, et al. I do not have time—is this the end of peer review in public health sciences? Public Health Rev. (2022) 43:1605407. doi: 10.3389/phrs.2022.1605407

12. Van Noorden, R. Open access: the true cost of science publishing. Nature. (2013) 495:426–9. doi: 10.1038/495426a

13. Huisman, J, and Smits, J. Duration and quality of the peer review process: the author’s perspective. Scientometrics. (2017) 113:633–50. doi: 10.1007/s11192-017-2310-5

14. Liang, W, Izzo, Z, Zhang, Y, Lepp, H, Cao, H, Zhao, X, et al. Monitoring AI-modified content at scale: a case study on the impact of ChatGPT on AI conference peer reviews. In: Proceedings of the International Conference on Machine Learning. (2024) 235:29575–620.

15. Swire-Thompson, B, and Lazer, D. Reducing health misinformation in science: a call to arms. Ann Am Acad Pol Soc Sci. (2022) 700:124–35. doi: 10.1177/00027162221087686

16. Van Noorden, R. The science that’s never been cited. Nature. (2017) 552:162–4. doi: 10.1038/d41586-017-08404-0

17. Rubinelli, S, Purnat, TD, Wilhelm, E, Traicoff, D, Namageyo-Funa, A, Thomson, A, et al. WHO competency framework for health authorities and institutions to manage infodemics: its development and features. Hum Resour Health. (2022) 20:35. doi: 10.1186/s12960-022-00733-0

18. Nicholson, A, and Haag, T. Navigating infodemics and building trust during public health emergencies. In: Proceedings of a Workshop—in Brief. (2023):1–15.

19. De Angelis, L, Baglivo, F, Arzilli, G, Privitera, GP, Ferragina, P, Tozzi, AE, et al. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Front Public Health. (2023) 11:1166120. doi: 10.3389/fpubh.2023.1166120

20. Fanelli, D. Do pressures to publish increase scientists’ bias? An empirical support from US states data. PLoS One. (2010) 5:e10271. doi: 10.1371/journal.pone.0010271

21. Haven, TL, Bouter, LM, Smulders, YM, and Tijdink, JK. Perceived publication pressure in Amsterdam: survey of all disciplinary fields and academic ranks. PLoS One. (2019) 14:e0217931. doi: 10.1371/journal.pone.0217931

22. Cui, Y, and Liu, X. A questionnaire survey on Chinese translation and interpreting scholars’ publication pressure and its impact on research quality and publishing ethics. J Empir Res Hum Res Ethics. (2023) 18:161–9. doi: 10.1177/15562646231164112

23. Májovský, M, Černý, M, Kasal, M, Komarc, M, and Netuka, D. Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora’s box has been opened. J Med Internet Res. (2023) 25:e46924. doi: 10.2196/46924

24. Spinellis, D. False authorship: an explorative case study around an AI-generated article published under my name. Res Integr Peer Rev. (2025) 10:1–8. doi: 10.1186/s41073-025-00165-z

25. Borges do Nascimento, IJ, Pizarro, AB, Almeida, JM, Azzopardi-Muscat, N, Gonçalves, MA, Björklund, M, et al. Infodemics and health misinformation: a systematic review of reviews. Bull World Health Organ. (2022) 100:544–61. doi: 10.2471/BLT.21.287654

26. Kisa, S, and Kisa, A. A comprehensive analysis of COVID-19 misinformation, public health impacts, and communication strategies: scoping review. J Med Internet Res. (2024) 26:e56931. doi: 10.2196/56931

27. Dash, S, Parray, AA, De Freitas, L, Mithu, MIH, Rahman, MM, Ramasamy, A, et al. Combating the COVID-19 infodemic: a three-level approach for low and middle-income countries. BMJ Glob Health. (2021) 6:e004671. doi: 10.1136/bmjgh-2020-004671

28. Iyandemye, J, and Thomas, MP. Low income countries have the highest percentages of open access publication: a systematic computational analysis of the biomedical literature. PLoS One. (2019) 14:e0220229. doi: 10.1371/journal.pone.0220229

29. Xia, J, Harmon, JL, Connolly, KG, Donnelly, RM, Anderson, MR, and Howard, HA. Who publishes in “predatory” journals? J Assoc Inf Sci Technol. (2015) 66:1406–17. doi: 10.1002/asi.23265

30. Abdul Baki, MN, and Alhaj Hussein, M. The impact of article processing charge waiver on conducting research in low-income countries. Confl Heal. (2021) 15:1–2. doi: 10.1186/s13031-021-00413-1

31. Research4Life. Home. (2024). Available online at: https://www.research4life.org/ (Accessed 12 April 2024).

32. El Bairi, K, Fourtassi, M, El Fatimy, R, and El Kadmiri, N. Distance education as a tool to improve researchers’ knowledge on predatory journals in countries with limited resources: the Moroccan experience. Int J Educ Integr. (2023) 19:1–15. doi: 10.1007/s40979-023-00122-7

33. European Parliament. Artificial intelligence in healthcare. (2022).

35. Guo, X, Dong, L, and Hao, D. RETRACTED: Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway. Front Cell Dev Biol. (2024) 11:1339390. doi: 10.3389/fcell.2023.1339390

36. Singh, A. A comparison study on AI language detector. In: 2023 IEEE 13th Annual Computing and Communication Workshop and Conference (CCWC). (2023):489–93.

37. Cohen, MS. Hydroxychloroquine for the prevention of Covid-19—searching for evidence. N Engl J Med. (2020) 383:585–6. doi: 10.1056/NEJMe2020388

38. Michalska-Smith, MJ, and Allesina, S. And, not or: quality, quantity in scientific publishing. PLoS One. (2017) 12:e0178074. doi: 10.1371/journal.pone.0178074

39. Ioannidis, JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. (2016) 94:485–514. doi: 10.1111/1468-0009.12210

40. Sun, L, Yin, C, Xu, Q, and Zhao, W. Artificial intelligence for healthcare and medical education: a systematic review. Am J Transl Res. (2023) 15:4820–8.

Keywords: artificial intelligence in health, scientific integrity, AI-generated publications, health professionals’ education, ethical guidelines in research

Citation: Arzilli G, Di Maggio E, De Angelis L, Baglivo F, Savoia E, Privitera GP and Rizzo C (2025) A surge of AI-driven publications: the impact on health professionals and potential mitigating solutions. Front. Public Health. 13:1680630. doi: 10.3389/fpubh.2025.1680630

Received: 06 August 2025; Accepted: 09 September 2025;
Published: 24 September 2025.

Edited by:

Kathleen W. Guan, Delft University of Technology, Netherlands

Reviewed by:

Ryan Varghese, Saint Joseph's University, United States

Copyright © 2025 Arzilli, Di Maggio, De Angelis, Baglivo, Savoia, Privitera and Rizzo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Elisa Di Maggio, elisa.dimaggio@unifg.it
