ORIGINAL RESEARCH article

Front. Med.

Sec. Healthcare Professions Education

This article is part of the Research Topic: Navigating the Digital Transformation of Healthcare Learning through Generative AI

Performance of o1 pro and GPT-4 in self-assessment questions for nephrology board renewal

Provisionally accepted
Ryunosuke Noda*, Chiaki Yuasa, Fumiya Kitano, Daisuke Ichikawa, Yugo Shibagaki
  • St. Marianna University School of Medicine, Kawasaki, Japan

The final, formatted version of the article will be published soon.

Background: Large language models (LLMs) are increasingly evaluated in medical education and clinical decision support, but their performance in highly specialized fields, such as nephrology, is not well established. We compared two advanced LLMs, GPT-4 and the newly released o1 pro, on comprehensive nephrology board renewal examinations.

Methods: We administered 209 Japanese Self-Assessment Questions for Nephrology Board Renewal from 2014–2023 to o1 pro and GPT-4 using ChatGPT pro. Each question, including any images, was presented in a separate chat session to prevent contextual carryover. Questions were classified by taxonomy (recall/interpretation/problem-solving), question type (general/clinical), image inclusion, and nephrology subspecialty. We calculated the proportion of correct answers and compared performance using chi-square or Fisher's exact tests.

Results: Overall, o1 pro scored 81.3% (170/209), significantly higher than GPT-4's 51.2% (107/209; p<0.001). o1 pro exceeded the 60% passing criterion in every year, whereas GPT-4 did so in only two of the ten years. Across taxonomy levels, question types, and questions with and without images, o1 pro consistently outperformed GPT-4 (p<0.05 for multiple comparisons). Performance differences were also significant in several nephrology subspecialties, such as chronic kidney disease, confirming o1 pro's broad superiority.

Conclusion: o1 pro significantly outperformed GPT-4 on a comprehensive nephrology board renewal examination, demonstrating advanced reasoning and integration of specialized knowledge. These findings highlight the potential of next-generation LLMs as valuable tools in nephrology, warranting further, careful validation.
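The abstract does not include the authors' analysis code; the following is a minimal illustrative sketch, assuming scipy, of how the overall proportion comparison reported above (170/209 vs. 107/209) could be reproduced with the chi-square and Fisher's exact tests named in the Methods.

```python
# Illustrative sketch only; not the authors' code. Counts are taken
# from the abstract: o1 pro 170/209 correct, GPT-4 107/209 correct.
from scipy.stats import chi2_contingency, fisher_exact

o1_correct, gpt4_correct, total = 170, 107, 209
table = [
    [o1_correct, total - o1_correct],     # o1 pro: correct, incorrect
    [gpt4_correct, total - gpt4_correct], # GPT-4: correct, incorrect
]

chi2, p_chi2, dof, _ = chi2_contingency(table)  # chi-square test on the 2x2 table
odds_ratio, p_fisher = fisher_exact(table)      # Fisher's exact test

print(f"o1 pro: {o1_correct/total:.1%}, GPT-4: {gpt4_correct/total:.1%}")
print(f"chi-square p = {p_chi2:.2e}; Fisher's exact p = {p_fisher:.2e}")
```

Both tests yield p < 0.001 for these counts, consistent with the significance reported in the Results.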

Keywords: Large language models, ChatGPT, GPT-4, o1, o1 pro, Nephrology

Received: 10 Sep 2025; Accepted: 30 Oct 2025.

Copyright: © 2025 Noda, Yuasa, Kitano, Ichikawa and Shibagaki. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Ryunosuke Noda, nodaryu00@gmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.