BRIEF RESEARCH REPORT article
Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 8 - 2025 | doi: 10.3389/frai.2025.1662203
This article is part of the Research Topic: GenAI in Healthcare: Technologies, Applications and Evaluation.
Using ChatGPT as an Assessment Tool for Medical Residents in Mexico: A Descriptive Experience
Provisionally accepted
1 Hospital General de Zona No 89, Guadalajara, Mexico
2 Hospital General de Zona 14, Hermosillo, Mexico
3 Hospital General de Zona 14, Guadalajara, Jalisco, Mexico
4 Departamento de Medicina y Ciencias de la Salud, Universidad de Sonora, Hermosillo, Mexico
Introduction: Artificial intelligence (AI) in medical education has advanced gradually, with numerous authors debating whether to prohibit, restrict, or adopt its use in academic contexts. Evidence is growing regarding the capabilities and applications of AI in this field, particularly in supporting educational tasks such as student assessment. In this article, we describe our experience using ChatGPT to evaluate medical residents.
Materials and Methods: A descriptive cross-sectional study was conducted involving 35 medical residents from different specialties at a secondary-level hospital. Two exams were generated with ChatGPT, one on Rocky Mountain spotted fever (RMSF) and one on pertussis. Additionally, a previously validated opinion survey was administered to assess participants' perceptions of ChatGPT's ability to generate multiple-choice questions.
Results: The overall average score was 8.46 on the pertussis examination and 8.29 on the RMSF examination. All participants reported that the examinations were well written and that the language used was coherent; 34 residents (97.14%) stated that the language was clear, concise, and easy to understand; 9 residents (25.71%) agreed that the language used was confusing; 33 residents (94.28%) rated the exam questions as difficult; and 32 residents (91.42%) felt they had prepared adequately for both examinations.
Discussion: ChatGPT shows promise as a tool to support teaching activities in the training of medical specialists, mainly by reducing the workload of healthcare personnel, and it may become integral to the next phase of medical education through AI-assisted content creation supervised by educators.
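The article does not report the exact prompts used, nor whether the exams were generated through the ChatGPT web interface or the API. As a minimal, hypothetical sketch of how a comparable multiple-choice exam could be requested programmatically, the snippet below uses the OpenAI Python client; the model name, prompt wording, and question count are illustrative assumptions, not details taken from the study.

# Hypothetical sketch: requesting a multiple-choice exam via the OpenAI Python client.
# The study does not specify the model, prompts, or interface; these are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Write a 10-question multiple-choice exam on Rocky Mountain spotted fever "
    "for medical residents. Give each question four options (A-D), exactly one "
    "correct answer, and a separate answer key at the end."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the paper does not name the version used
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # exam text, to be reviewed by educators

As the Discussion emphasizes, any generated items would still be reviewed by educators before being administered to residents.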
Keywords: ChatGPT, artificial intelligence, medical education, resident physicians, multiple-choice question exams
Received: 08 Jul 2025; Accepted: 28 Aug 2025.
Copyright: © 2025 Rivera-Rosas, Calleja-López, Larios-Camacho and Trujillo-López. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Cristian Noe Rivera-Rosas, Hospital General de Zona No 89, Guadalajara, Mexico
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.