BRIEF RESEARCH REPORT article
Front. Med.
Sec. Healthcare Professions Education
Benchmarking Large Language Models for Medical Education: Performance on the Clinical Laboratory Technologist Qualification Examination
Yaqing Ruan 1
Yue Jiang 2
Wen Jin 2
Weinan Lin 2
Yijun Xu 2
Jiangda Wang 2
XiuQing Wang 2
Zhaoxi Fang 2
1. Shaoxing University Affiliated Hospital, Shaoxing, China
2. Shaoxing University, Shaoxing, China
Abstract
Large language models (LLMs) have shown growing applications in medicine, yet their capabilities in clinical laboratory technology remain underexplored. This study evaluates the performance of LLMs on the Chinese Clinical Laboratory Technologist Qualification Examination (CCLTQE) and provides empirical evidence for their application in laboratory medicine. We constructed a dataset of 1,600 single-choice CCLTQE questions covering four sections: clinical laboratory fundamentals, other medical knowledge related to clinical laboratory technology, clinical laboratory specialized knowledge, and clinical laboratory professional practice competence. We evaluated 12 LLMs, including models from the DeepSeek, GPT, Llama, Qwen, and Gemma series. Qwen3-235B achieved the highest overall accuracy (89.93%), followed by DeepSeek-R1 (89.75%) and QwQ-32B (89.22%). These results show that LLMs optimized for Chinese-language and domain-specific content perform strongly on the CCLTQE, indicating significant potential for AI-assisted education and practice in laboratory medicine.
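As a rough illustration of the scoring protocol implied by the abstract, the sketch below computes per-section and overall accuracy for single-choice questions. The record format, the section labels, and the assumption that each model's output has already been reduced to a single option letter are ours for illustration, not details taken from the paper.

```python
# Minimal sketch of MCQ benchmark scoring, assuming each graded item is a
# (section, gold_answer, predicted_answer) tuple with option letters "A".."E".
from collections import defaultdict

def accuracy_by_section(records):
    """Return (per-section accuracy dict, overall accuracy)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for section, gold, pred in records:
        total[section] += 1
        # Normalize case/whitespace before comparing option letters.
        if pred.strip().upper() == gold.strip().upper():
            correct[section] += 1
    per_section = {s: correct[s] / total[s] for s in total}
    overall = sum(correct.values()) / sum(total.values())
    return per_section, overall

# Hypothetical toy records using two of the four CCLTQE sections.
records = [
    ("clinical laboratory fundamentals", "A", "A"),
    ("clinical laboratory fundamentals", "C", "B"),
    ("clinical laboratory professional practice competence", "D", "D"),
]
per_section, overall = accuracy_by_section(records)
print(per_section)                          # {'clinical laboratory fundamentals': 0.5, ...: 1.0}
print(f"overall accuracy: {overall:.2%}")   # overall accuracy: 66.67%
```

In a full evaluation, `records` would be produced by prompting each of the 12 models on all 1,600 questions; the reported figures (e.g., 89.93% for Qwen3-235B) correspond to the `overall` value here.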
Keywords
Clinical Laboratory Technologist Qualification Examination, DeepSeek, laboratory medicine, large language models, model evaluation
Received: 28 November 2025
Accepted: 22 January 2026
Copyright
© 2026 Ruan, Jiang, Jin, Lin, Xu, Wang, Wang and Fang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Zhaoxi Fang
Disclaimer
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.