ORIGINAL RESEARCH article

Front. Radiol.

Sec. Interventional Radiology

Volume 5 - 2025 | doi: 10.3389/fradi.2025.1682725

This article is part of the Research Topic: Advances in Venous Interventions

Comparison of Artificial Intelligence Models and Physicians in Patient Education for Varicocele Embolization: A Double-Blind Randomized Controlled Trial

Provisionally accepted
Ozgur Genc1*, Omer Naci Tabakci2
  • 1Department of Radiology, Istanbul Aydin Universitesi VM Medical Park Florya Hastanesi, Istanbul, Türkiye
  • 2TC Saglik Bakanligi Kocaeli Sehir Hastanesi, Izmit, Türkiye

The final, formatted version of the article will be published soon.

Background: Large language models (LLMs) appear to be capable of performing a variety of tasks, including answering questions, but few studies have evaluated them in direct comparison with clinicians. This study aims to compare the performance of artificial intelligence (AI) models and clinical specialists in informing patients about varicocele embolization. Additionally, we aim to establish an evidence base for future hybrid informational systems that integrate both AI and clinical expertise.

Methods: In this prospective, double-blind, randomized controlled trial, 25 frequently asked questions about varicocele embolization (collected via Google Search trends, patient forums, and clinical experience) were answered by three AI models (ChatGPT-4o, Gemini Pro, and Microsoft Copilot) and one interventional radiologist. Responses were randomized and evaluated by two independent interventional radiologists using a validated 5-point Likert scale for academic accuracy and empathy.

Results: Gemini achieved the highest mean expert-rated scores for both academic accuracy (4.09 ± 0.50, 95% CI: 3.95-4.23) and empathetic communication (3.54 ± 0.59, 95% CI: 3.38-3.70), followed by Copilot (academic: 4.07 ± 0.46, 95% CI: 3.94-4.20; empathy: 3.48 ± 0.53, 95% CI: 3.33-3.63), ChatGPT (academic: 3.83 ± 0.58, 95% CI: 3.67-3.99; empathy: 2.92 ± 0.78, 95% CI: 2.70-3.14), and the comparator physician (academic: 3.75 ± 0.41, 95% CI: 3.64-3.86; empathy: 3.12 ± 0.82, 95% CI: 2.89-3.35). ANOVA revealed statistically significant differences across groups for both academic accuracy (F = 6.181, p < 0.001, η² = 0.086) and empathy (F = 9.106, p < 0.001, η² = 0.122). Effect sizes were medium for academic accuracy and large for empathy.
Conclusions: AI models, particularly Gemini, received higher ratings from expert evaluators than the comparator physician in patient education regarding varicocele embolization, excelling in both academic accuracy and empathetic communication style. These preliminary findings suggest that AI models hold significant potential to complement patient education systems in interventional radiology practice and support the development of hybrid patient education models.
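The Results report a one-way ANOVA F statistic together with eta-squared (η²) as the effect-size measure. As a minimal sketch of how these two quantities relate, the function below computes both from a list of per-responder score groups. The Likert scores in the usage note are hypothetical illustrations, not the study's data.

```python
def anova_f_and_eta_squared(groups):
    """One-way ANOVA F statistic and eta-squared for a list of score groups.

    eta-squared = SS_between / SS_total, i.e. the share of total variance
    explained by group membership (the effect size reported in the Results).
    """
    all_vals = [x for g in groups for x in g]
    n_total = len(all_vals)
    grand_mean = sum(all_vals) / n_total

    # Between-group sum of squares: group sizes times squared deviation
    # of each group mean from the grand mean.
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: squared deviations from each group's mean.
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups for x in g
    )

    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_squared = ss_between / (ss_between + ss_within)
    return f_stat, eta_squared


# Hypothetical Likert ratings for four responders (illustration only):
gemini = [4, 5, 4, 4]
copilot = [4, 4, 5, 4]
chatgpt = [4, 3, 4, 4]
physician = [3, 4, 4, 3]
f_stat, eta_sq = anova_f_and_eta_squared([gemini, copilot, chatgpt, physician])
```

By the conventional benchmarks cited in the abstract, η² around 0.06 is a medium effect and around 0.14 a large effect, which is how the reported values (0.086 and 0.122) map to "medium" and "large".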

Keywords: artificial intelligence, Patient Education, Varicocele embolization, interventional radiology, Empathy, academic accuracy, Large language models

Received: 09 Aug 2025; Accepted: 30 Sep 2025.

Copyright: © 2025 Genc and Tabakci. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Ozgur Genc, drozgurgenc@gmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.