ORIGINAL RESEARCH article
Front. Oncol.
Sec. Radiation Oncology
Volume 15 - 2025 | doi: 10.3389/fonc.2025.1557064
A recent evaluation of the performance of LLMs on radiation oncology physics using questions with randomly shuffled options
Provisionally accepted
- 1 Department of Radiation Oncology, Mayo Clinic Arizona, Scottsdale, Arizona, United States
- 2 School of Computing, University of Georgia, Athens, Georgia, United States
- 3 Department of Radiology, Mayo Clinic, Rochester, Minnesota, United States
Purpose: We present an updated study evaluating the performance of large language models (LLMs) in answering radiation oncology physics questions, focusing on recently released models.

Methods: A set of 100 multiple-choice radiation oncology physics questions, previously created by an experienced medical physicist, was used for this study. The answer options of the questions were randomly shuffled to create "new" exam sets. Five LLMs, OpenAI o1-preview, GPT-4o, LLaMA 3.1 (405B), Gemini 1.5 Pro, and Claude 3.5 Sonnet, in versions released before September 30, 2024, were queried using these new exam sets. To evaluate their deductive reasoning ability, the correct answer option in each question was replaced with "None of the above." Explain-first and step-by-step instruction prompts were then used to test whether this strategy improved their reasoning ability. The performance of the LLMs was compared with answers from medical physicists.

Results: All models demonstrated expert-level performance on these questions, with o1-preview even surpassing medical physicists under a majority vote. When the correct answer options were replaced with "None of the above," all models exhibited a considerable decline in performance, suggesting room for improvement. The explain-first and step-by-step instruction prompts enhanced the reasoning ability of the LLaMA 3.1 (405B), Gemini 1.5 Pro, and Claude 3.5 Sonnet models.

Conclusion: These recently released LLMs demonstrated expert-level performance in answering radiation oncology physics questions.
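The two exam manipulations described in the Methods, shuffling answer options and substituting the correct option with "None of the above", can be sketched as follows. This is a minimal illustration only; the question-dictionary format and function names are assumptions, not the authors' actual code.

```python
import random

def shuffle_options(question):
    """Randomly shuffle a question's answer options,
    tracking where the correct answer lands."""
    # Assumed format: {"stem": str, "options": [str, ...], "answer_index": int}
    options = question["options"][:]
    correct = question["options"][question["answer_index"]]
    random.shuffle(options)
    return {"stem": question["stem"],
            "options": options,
            "answer_index": options.index(correct)}

def replace_correct_with_nota(question):
    """Remove the correct option and append 'None of the above',
    which becomes the new correct answer."""
    options = [opt for i, opt in enumerate(question["options"])
               if i != question["answer_index"]]
    options.append("None of the above")
    return {"stem": question["stem"],
            "options": options,
            "answer_index": len(options) - 1}
```

Repeating the shuffle over the full 100-question set yields a "new" exam whose content is unchanged but whose option ordering cannot be matched to any memorized answer key.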
Keywords: Radiation Oncology, large language model (LLM), Physics, Evaluation, Augmentation
Received: 08 Jan 2025; Accepted: 29 Apr 2025.
Copyright: © 2025 Wang, Holmes, Liu, Chen, Liu, Shen and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Wei Liu, Department of Radiation Oncology, Mayo Clinic Arizona, Scottsdale, AZ 85259, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.