PERSPECTIVE article
Front. Artif. Intell.
Sec. AI for Human Learning and Behavior Change
Volume 8 - 2025 | doi: 10.3389/frai.2025.1644098
This article is part of the Research Topic: AI4Science: New Paradigms and Trends
AI for Scientific Integrity: Detecting Ethical Breaches, Errors, and Misconduct in Manuscripts
Provisionally accepted
1 University of Saskatchewan, Saskatoon, Canada
2 Graduate School for Media and Governance, Vaccine and Infectious Disease Organization, International Vaccine Centre (VIDO-InterVac), Saskatoon, Kanagawa, Canada
The use of Generative AI (GenAI) in scientific writing has grown rapidly, offering tools for manuscript drafting, literature summarization, and data analysis. However, these benefits are accompanied by risks, including undisclosed AI authorship, manipulated content, and the emergence of paper mills. This perspective examines two key strategies for maintaining research integrity in the GenAI era: (1) detecting unethical or inappropriate use of GenAI in scientific manuscripts and (2) using AI tools to identify mistakes in the scientific literature, such as statistical errors, image manipulation, and incorrect citations. We review the capabilities and limitations of existing AI detectors designed to differentiate human-written text (HWT) from machine-generated text (MGT), highlighting performance gaps, genre sensitivity, and vulnerability to adversarial attacks. We also investigate emerging AI-powered systems aimed at identifying errors in published research, including tools for statistical verification, citation validation, and image-manipulation detection. Additionally, we discuss recent publishing-industry initiatives to counter AI-driven paper mills. Although these developments are not yet sufficiently accurate or reliable for use in academic assessment, they mark early but promising steps toward scalable, AI-assisted quality control in scholarly publishing.
Keywords: artificial intelligence, generative AI, research integrity, research ethics, responsible research, AI detection
Received: 09 Jun 2025; Accepted: 15 Aug 2025.
Copyright: © 2025 Pellegrina and Helmy. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Mohamed Helmy, Graduate School for Media and Governance, Vaccine and Infectious Disease Organization, International Vaccine Centre (VIDO-InterVac), Saskatoon, 252-0882, Kanagawa, Canada
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.