ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 8 - 2025 | doi: 10.3389/frai.2025.1642361
This article is part of the Research Topic: Digital Medicine and Artificial Intelligence.
Quantitative evaluation of meibomian gland dysfunction via deep learning-based infrared image segmentation
Provisionally accepted
- 1 Beijing Institute of Technology, Zhuhai, Zhuhai, China
- 2 Beijing Institute of Technology, Beijing, China
- 3 Zhuhai Institute of Advanced Technology, Chinese Academy of Sciences, Zhuhai, China
- 4 Beijing Normal-Hong Kong Baptist University, Zhuhai, China
- 5 Zhuhai City People's Hospital, Zhuhai, China
- 6 The Chinese University of Hong Kong, Hong Kong SAR, China
In recent years, numerous advanced image segmentation algorithms have been applied to the analysis of meibomian glands (MG). However, their clinical utility remains limited owing to insufficient integration with the diagnostic and grading workflows for meibomian gland dysfunction (MGD). To bridge this gap, the present study leverages three state-of-the-art deep learning models (DeepLabV3+, U-Net, and U-Net++) to segment infrared MG images and extract quantitative features for MGD diagnosis and severity assessment. A comprehensive set of morphological indicators (e.g., gland area, width, length, and distortion) and distributional indicators (e.g., gland density, count, inter-gland distance, disorder degree, and loss ratio) was derived from the segmentation outputs. Spearman correlation analysis revealed significant positive associations between most indicators and MGD severity (correlation coefficients ranging from 0.26 to 0.58; P < 0.001), indicating their potential diagnostic value. Furthermore, box-plot analysis highlighted clear distributional differences for the majority of indicators across grades, with medians shifting progressively, interquartile ranges widening, and outliers increasing, reflecting morphological changes associated with disease progression. Logistic regression models trained on these quantitative features yielded area under the receiver operating characteristic curve (AUC) values of 0.89±0.02, 0.76±0.03, 0.85±0.02, and 0.94±0.01 for MGD grades 0, 1, 2, and 3, respectively. The models demonstrated strong classification performance, with micro-average and macro-average AUCs of 0.87±0.02 and 0.86±0.03, respectively. Model stability and generalizability were validated through 5-fold cross-validation. Collectively, these findings underscore the clinical relevance and robustness of deep learning-assisted quantitative analysis for the objective diagnosis and grading of MGD, offering a promising framework for automated medical image interpretation in ophthalmology.
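A minimal sketch of the downstream statistical analysis described in the abstract is shown below. It is not the authors' released code: the CSV file name and indicator column names are hypothetical, and the segmentation-derived features are assumed to already be tabulated, one row per eye, alongside a clinician-assigned MGD grade (0-3). The sketch computes the Spearman correlation of each indicator with severity and evaluates a one-vs-rest logistic regression grading model by per-grade, micro-average, and macro-average AUC under 5-fold cross-validation, using standard SciPy and scikit-learn calls.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, label_binarize

# Hypothetical feature table: indicators derived from the segmentation masks
# (gland area, width, length, distortion, density, count, inter-gland
# distance, disorder degree, loss ratio) plus an MGD grade column (0-3).
df = pd.read_csv("mgd_features.csv")  # assumed file name
feature_cols = [c for c in df.columns if c != "grade"]
X, y = df[feature_cols].to_numpy(), df["grade"].to_numpy()

# Spearman correlation of each indicator with MGD severity.
for col in feature_cols:
    rho, p = spearmanr(df[col], y)
    print(f"{col:>20s}: rho = {rho:+.2f}, p = {p:.1e}")

# One-vs-rest logistic regression, evaluated with 5-fold cross-validation.
grades = np.unique(y)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
per_grade = {g: [] for g in grades}
micro, macro = [], []
for train_idx, test_idx in cv.split(X, y):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])          # shape: (n_test, n_grades)
    y_bin = label_binarize(y[test_idx], classes=grades)
    for i, g in enumerate(grades):
        per_grade[g].append(roc_auc_score(y_bin[:, i], prob[:, i]))
    micro.append(roc_auc_score(y_bin, prob, average="micro"))
    macro.append(roc_auc_score(y_bin, prob, average="macro"))

# Report mean +/- standard deviation across folds, as in the abstract.
for g in grades:
    print(f"Grade {g}: AUC = {np.mean(per_grade[g]):.2f} ± {np.std(per_grade[g]):.2f}")
print(f"Micro-average AUC = {np.mean(micro):.2f} ± {np.std(micro):.2f}")
print(f"Macro-average AUC = {np.mean(macro):.2f} ± {np.std(macro):.2f}")
```

The one-vs-rest treatment mirrors the per-grade AUC reporting in the abstract; stratified folds keep each MGD grade represented in every test split so the per-grade AUCs remain defined.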
Keywords: image segmentation, meibomian gland dysfunction, dry eye disease, meibography, deep learning
Received: 06 Jun 2025; Accepted: 07 Oct 2025.
Copyright: © 2025 Yu, Wei, Cui, Tan, Xu and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Zhijun Wei, 284528575@qq.com
Mini Han Wang, 1155187855@link.cuhk.edu.hk
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.