GENERAL COMMENTARY article
Front. Ophthalmol.
Sec. New Technologies in Ophthalmology
This article is part of the Research Topic: Artificial Intelligence in Ophthalmology: Innovations and Clinical Impact
Commentary: Synergistic AI-resident approach achieves superior diagnostic accuracy in tertiary ophthalmic care for glaucoma and retinal disease
Provisionally accepted
1 Universidad Latina de Panama, Panama City, Panama
2 Mayo Clinic Florida, Jacksonville, United States
A Commentary on: Synergistic AI-resident approach achieves superior diagnostic accuracy in tertiary ophthalmic care for glaucoma and retinal disease. Front Ophthalmol. 2025.

Camacho-García-Formentí et al. present an impressive demonstration of how a synergistic collaboration between artificial intelligence systems and resident physicians can improve diagnostic accuracy in glaucoma and retinal disease. Not only do they show that an AI-resident partnership can outperform either one alone; they also demonstrate something that many groups struggle to capture: how these systems behave in a real tertiary-care environment. Anyone who has worked in a busy ophthalmology service knows that elegant results on paper are one thing, and making them coexist with high patient load, mixed pathology, and time pressure is another. Their team managed both.

Although the combined approach was the top performer, the study also shows that AI on its own outperformed first-year residents across several key measures, including higher accuracy in glaucoma suspect classification (88.6% vs. 82.9%) and much higher sensitivity for retinal disease (76% vs. 52%, and 100% for high-risk findings). The AI's cup-to-disc ratio (CDR) estimates also tracked more closely with expert measurements than the residents' did (r = 0.728 vs. 0.538), although the system could evaluate CDR in only 61.6% of patients because of image-quality issues. That detail matters: even when an algorithm performs well, real-world imaging is variable, and someone still needs to handle the cases it cannot process.

As we read through their results, a central question emerged. We talk a great deal about "human in the loop" AI, and it is usually framed as a reassuring idea: the algorithm assists, but the clinician remains in control. That logic works today, when clinicians have years of pattern recognition behind them. But what happens when the human entering the loop has had fewer chances to build the very skills the loop depends on?

Ophthalmology is built on repetition.
Residents grow by seeing normal variants, borderline OCTs, unusual discs, cases that fooled everyone for a moment, and even the occasional false alarm. These encounters are not random; they form the "texture" of training. With an integrated AI system reaching 76% sensitivity overall and up to 100% for high-risk findings, the educational risk may shift toward reduced cognitive effort. The stakes of being wrong feel lower, and it becomes easier for trainees to lean on the model's output. That subtle shift is enough to reshape how clinical judgment develops.

The picture shifts even more if these systems begin to be used concurrently, or as a first-pass screening tool, given the higher accuracy demonstrated by the synergistic approach. Current discussions about glaucoma care already consider AI-based screening and triage likely components of routine workflows (Galvez-Sánchez et al., 2024; Myślicka et al., 2024). If AI becomes highly reliable at identifying early glaucoma or prioritizing retinal findings, and begins filtering or labeling images before a trainee even looks at them, residents may start encountering a narrower slice of disease, mostly the ambiguous or the highly complex. That sounds ideal at first, but exposure to the full spectrum is what builds confidence in calling something "normal" or "stable," which is just as important as diagnosing pathology. And when AI makes the right call 90% of the time, trainees might, without meaning to, start deferring to the algorithm instead of forming their own mental map first.

Of course, this future is not inevitable. There is room for deliberate design. AI systems could be built with "teaching modes" that intentionally route uncertain or instructive cases to trainees before any automatic labeling.
They could generate sets of high-yield comparisons, or highlight regions of low model confidence, so that residents learn where humans still outperform machines. Early work in explainable AI and recent discussions on supervised integration suggest that these kinds of interactions could become a genuine educational asset (Heinke et al., 2024). But none of that will happen if training is not part of the AI conversation from the beginning.

The article by Camacho-García-Formentí et al. shows how well AI can support clinical care right now. What we hope to add is that implementation planning should also consider how residents will grow inside these new systems. The "human in the loop" model only works if the human entering the loop is well prepared, and that preparation depends on protecting opportunities for independent clinical reasoning.

Their study opens the door to improving accuracy and workflow efficiency. The next step is making sure that the same progress strengthens, rather than narrows, the education of future ophthalmologists.
Keywords: artificial intelligence, Clinical training, Diagnostic workflow, Medical Education, Ophthalmology
Received: 19 Nov 2025; Accepted: 08 Dec 2025.
Copyright: © 2025 Williams-Gaona and Corro. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Rosa Corro
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
