SYSTEMATIC REVIEW article
Front. Res. Metr. Anal.
Sec. Emerging Technologies and Transformative Paradigms in Research
Volume 10 - 2025 | doi: 10.3389/frma.2025.1684137
Can AI Assess Literature Like Experts? An Entropy-Based Comparison of ChatGPT-4o, DeepSeek R1, and Human Ratings
Provisionally accepted
Nanjing Sport Institute, Nanjing, China
Background: Manual quality assessment of systematic reviews is labor-intensive, time-consuming, and subject to reviewer bias. Given recent advances in large language models (LLMs), their reliability and efficiency as potential replacements for human reviewers warrant evaluation.
Aim: This study assessed whether generative AI models can substitute for human reviewers in literature quality assessment by examining rating consistency, time efficiency, and discriminatory performance across four established appraisal tools.
Methods: Ninety-one systematic reviews were evaluated with AMSTAR 2, CASP, PEDro, and RoB 2 by human reviewers and two LLMs (ChatGPT-4o and DeepSeek R1). Entropy-based indicators quantified rating consistency, while Spearman correlations, receiver operating characteristic (ROC) analysis, and processing-time comparisons assessed the relationship between time variability and scoring reliability.
Results: The two LLMs showed high consistency with human ratings (mean entropy = 0.42), with particularly strong alignment for PEDro (0.17) and CASP (0.25). Average processing time per article was markedly shorter for LLMs (33.09 seconds) than for human reviewers (1,582.50 seconds), a 47.80-fold gain in efficiency. Spearman correlation analysis revealed a statistically significant positive association between processing-time variability and rating entropy (ρ = 0.24, p = 0.026), indicating that greater time variability was associated with lower consistency. ROC analysis further showed that processing-time variability was a moderate predictor of moderate-to-low consistency (AUC = 0.65, p = 0.045), with 46.00 seconds identified as the optimal cutoff threshold.
Conclusion: LLMs markedly reduce appraisal time while maintaining acceptable rating consistency in literature quality assessment. Although human validation is recommended for cases with high processing-time variability (>46.00 seconds), generative AI represents a promising approach for standardized, efficient, and scalable quality appraisal in evidence synthesis.
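To make the analysis pipeline concrete, the sketch below illustrates, in Python, how an entropy-based consistency measure, the Spearman correlation with processing-time variability, and the ROC-based cutoff could be computed. This is not the authors' code: the per-article rating labels, the use of the standard deviation of processing time as the variability measure, the median-entropy definition of "moderate-to-low consistency", and the Youden-index cutoff selection are all illustrative assumptions.

```python
# Minimal illustrative sketch (not the published analysis) of an
# entropy-based consistency comparison between human and LLM ratings.
import numpy as np
from collections import Counter
from scipy.stats import entropy, spearmanr
from sklearn.metrics import roc_curve, roc_auc_score

def rating_entropy(ratings):
    """Shannon entropy (bits) of the ratings given to one article by the
    human reviewer and the two LLMs; 0 indicates complete agreement."""
    counts = np.array(list(Counter(ratings).values()), dtype=float)
    return entropy(counts / counts.sum(), base=2)

# Hypothetical per-article data: ratings from (human, ChatGPT-4o, DeepSeek R1)
# and the standard deviation of LLM processing time in seconds (assumed
# here as the "processing-time variability" measure).
articles = [
    {"ratings": ["high", "high", "high"],      "time_sd": 12.0},
    {"ratings": ["high", "moderate", "high"],  "time_sd": 51.0},
    {"ratings": ["low", "moderate", "high"],   "time_sd": 78.0},
]

entropies = np.array([rating_entropy(a["ratings"]) for a in articles])
time_sds = np.array([a["time_sd"] for a in articles])

# Spearman correlation between time variability and rating entropy.
rho, p = spearmanr(time_sds, entropies)

# ROC analysis: does time variability predict moderate-to-low consistency
# (defined here, as an assumption, by entropy above the sample median)?
labels = (entropies > np.median(entropies)).astype(int)
auc = roc_auc_score(labels, time_sds)
fpr, tpr, thresholds = roc_curve(labels, time_sds)
cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden-index optimal cutoff

print(f"Spearman rho={rho:.2f} (p={p:.3f}); AUC={auc:.2f}; cutoff={cutoff:.1f} s")
```

With real data, the same structure would yield the reported ρ = 0.24, AUC = 0.65, and 46.00-second cutoff; the toy values above exist only to keep the example runnable.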
Keywords: artificial intelligence (AI), expert assessment, literature evaluation, ChatGPT-4o, DeepSeek R1, entropy-based method, machine and human comparison
Received: 14 Aug 2025; Accepted: 23 Oct 2025.
Copyright: © 2025 Zhou and HU. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Haixu HU, hhx100000@163.com
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.