AUTHOR=Zhao Weilong, Lai Honghao, Pan Bei, Huang Jiajie, Xia Danni, Bai Chunyang, Liu Jiayi, Liu Jianing, Jin Yinghui, Shang Hongcai, Liu Jianping, Shi Nannan, Liu Jie, Chen Yaolong, Estill Janne, Ge Long
TITLE=Assessing the adherence of large language models to clinical practice guidelines in Chinese medicine: a content analysis
JOURNAL=Frontiers in Pharmacology
VOLUME=16
YEAR=2025
URL=https://www.frontiersin.org/journals/pharmacology/articles/10.3389/fphar.2025.1649041
DOI=10.3389/fphar.2025.1649041
ISSN=1663-9812
ABSTRACT=
Objective: Whether large language models (LLMs) can effectively facilitate the acquisition of Chinese medicine (CM) knowledge remains uncertain. This study aims to assess the adherence of LLMs to clinical practice guidelines (CPGs) in CM.
Methods: This cross-sectional study randomly selected ten CPGs in CM and constructed 150 questions across three categories: medication based on differential diagnosis (MDD), specific prescription consultation (SPC), and CM theory analysis (CTA). Eight LLMs (GPT-4o, Claude-3.5 Sonnet, Moonshot-v1, ChatGLM-4, DeepSeek-v3, DeepSeek-r1, Claude-4 Sonnet, and Claude-4 Sonnet Thinking) were evaluated using both English and Chinese queries. The main evaluation metrics were accuracy, readability, and use of safety disclaimers.
Results: Overall, DeepSeek-v3 and DeepSeek-r1 demonstrated superior performance in both English (median 5.00, interquartile range (IQR) 4.00–5.00 vs. median 5.00, IQR 3.70–5.00) and Chinese (both median 5.00, IQR 4.30–5.00), significantly outperforming all other models. All models achieved significantly higher accuracy in Chinese than in English responses (all p < 0.05). Accuracy also varied significantly across question categories, with MDD and SPC questions proving more challenging than CTA questions. English responses were less readable (mean Flesch Reading Ease score 32.7) than Chinese responses. Moonshot-v1 provided the highest rate of safety disclaimers (98.7% in English, 100% in Chinese).
Conclusion: LLMs showed varying degrees of potential for acquiring CM knowledge. The performance of DeepSeek-v3 and DeepSeek-r1 was satisfactory. Optimizing LLMs to become effective tools for disseminating CM information is an important direction for future development.
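
The readability finding above is reported as a Flesch Reading Ease score. As a minimal, hedged illustration (the abstract does not state which tool the authors used, and the counts below are hypothetical), the sketch computes the standard Flesch Reading Ease formula from raw word, sentence, and syllable counts:

```python
# Hedged sketch: the standard Flesch Reading Ease formula, which the reported
# mean score of 32.7 is presumably based on. The example counts are hypothetical.
# FRE = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_reading_ease(total_words: int, total_sentences: int, total_syllables: int) -> float:
    """Compute the Flesch Reading Ease score from raw text counts."""
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example: a response with 180 words, 6 sentences, and 310 syllables
print(round(flesch_reading_ease(180, 6, 310), 1))  # ~30.7; lower scores mean harder text
```

Scores in the low 30s, such as the 32.7 reported for English responses, fall in the "difficult" (roughly college-level) band of the conventional Flesch scale, consistent with the authors' observation that English outputs were less readable than Chinese ones.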