ORIGINAL RESEARCH article
Front. Comput. Sci.
Sec. Computer Vision
Volume 7 - 2025 | doi: 10.3389/fcomp.2025.1576958
This article is part of the Research Topic: Foundation Models for Healthcare: Innovations in Generative AI, Computer Vision, Language Models, and Multimodal Systems.
Deep Learning for Vision Screening in Resource-Limited Settings: Development of Multi-Branch CNN for Refractive Error Detection Based on Smartphone Image
Provisionally accepted
- 1Faculty of Medicine, Andalas University, Padang, Indonesia
- 2Faculty of Public Health, University of Indonesia, Depok, West Java, Indonesia
- 3Strathclyde Business School, Glasgow, Scotland, United Kingdom
- 4Faculty of Medicine, University of Indonesia, Jakarta, Papua, Indonesia
Globally, uncorrected refractive errors are the leading cause of preventable vision impairment, disproportionately affecting individuals in low-resource regions. Despite the availability of economical treatments, challenges in timely diagnosis and screening access persist, particularly in underserved communities. This study introduces a novel deep learning-based system for automated refractive error classification using photorefractive images acquired with a standard smartphone camera. A multi-branch convolutional neural network (CNN) was developed and trained on a dataset of 2,139 corneal images from an Indonesian sample, enabling the model to classify refractive errors into four categories: significant myopia, significant hypermetropia, insignificant refractive error, and not applicable for classification. The 3-branch CNN architecture demonstrated superior performance, achieving an overall test accuracy of 91%, precision of 96%, and recall of 98%, with an area under the curve (AUC) score of 0.9896. Its multi-scale feature-extraction pathways were pivotal in addressing overlapping red reflex patterns and subtle variations between classes. The dataset, collected from a public eye hospital in Indonesia, reflects real-world diversity and strengthens the model's reliability for deployment in resource-limited settings. Grad-CAM visualization provided insight into the model's decision-making, enhancing its interpretability and clinical applicability. This study establishes the feasibility of smartphone-based photorefractive assessment integrated with artificial intelligence for scalable and cost-effective vision screening. By training the CNN on data representative of Southeast Asian populations, the system offers a reliable solution for early refractive error detection, with significant implications for improving access to eye care services.
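To illustrate the kind of multi-branch, multi-scale architecture the abstract describes, the following is a minimal PyTorch sketch of a 3-branch CNN classifier for four refractive error categories. All layer widths, kernel sizes, and names here are illustrative assumptions for exposition, not the authors' published architecture; the published model may differ in depth, branch design, and training details.

```python
# Hypothetical sketch of a 3-branch CNN for 4-class refractive error
# classification from smartphone photorefraction images.
# Layer sizes, kernel choices, and names are illustrative assumptions,
# not the architecture reported in the article.
import torch
import torch.nn as nn


class BranchCNN(nn.Module):
    """One feature-extraction pathway with a fixed kernel size."""
    def __init__(self, kernel_size: int):
        super().__init__()
        pad = kernel_size // 2
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size, padding=pad),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size, padding=pad),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global average pooling per branch
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, 32)


class MultiBranchCNN(nn.Module):
    """Three parallel branches with different receptive fields,
    concatenated before a shared classification head."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Small/medium/large kernels approximate multi-scale feature extraction
        self.branches = nn.ModuleList([BranchCNN(k) for k in (3, 5, 7)])
        self.classifier = nn.Sequential(
            nn.Linear(32 * 3, 64),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.classifier(feats)


if __name__ == "__main__":
    model = MultiBranchCNN(num_classes=4)
    dummy = torch.randn(2, 3, 224, 224)  # two RGB photorefraction crops
    logits = model(dummy)                # shape: (2, 4)
    print(logits.shape)
```

The intent of the parallel kernel sizes is to capture both the broad red reflex crescent and finer intensity gradients within the pupil, which is one plausible way to realize the "multi-scale feature extraction pathways" mentioned above; per-class probabilities would follow from a softmax over the logits.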
Keywords: Refractive Error Detection, Vision Screening, artificial intelligence, Convolutional Neural Network, smartphone, Red reflex, Photorefraction
Received: 14 Feb 2025; Accepted: 14 Jul 2025.
Copyright: © 2025 Syauqie, Patria, Hastono, Siregar and Moeloek. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Muhammad Syauqie, Faculty of Medicine, Andalas University, Padang, Indonesia
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.