METHODS article
Front. Oncol.
Sec. Cancer Imaging and Image-directed Interventions
Volume 15 - 2025 | doi: 10.3389/fonc.2025.1624111
This article is part of the Research Topic: Advanced Machine Learning Techniques in Cancer Prognosis and Screening
Leveraging Deep Learning for Early Detection of Cervical Cancer and Dysplasia in China Using U-NET++ and RepVGG Networks
Provisionally accepted
- 1 The Second Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- 2 The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- 3 Soochow University, Suzhou, China
- 4 Monash University, Melbourne, Australia
- 5 The University of Sydney, Sydney, Australia
- 6 The First Affiliated Hospital of Bengbu Medical University, Bengbu, China
- 7 Nantong University Affiliated Jiangyin Hospital, Nantong, China
- 8 Zhejiang University School of Medicine, Zhejiang, China
- 9 University of Oxford, Oxford, United Kingdom
- 10 Southern Medical University, Guangzhou, China
- 11 Zhejiang Wanli University, Zhejiang, China
- 12 Affiliated Hospital of Jiangnan University, Wuxi, China
Background: Cervical cancer is a significant global public health issue, primarily caused by persistent high-risk human papillomavirus (HPV) infections. The disease burden is disproportionately higher in low- and middle-income regions, such as rural China, where limited access to screening and vaccination leads to increased incidence and mortality. Cervical cancer is preventable and treatable when detected early.

Objective: This study leverages deep learning techniques to improve the early detection of cervical cancer by enhancing the diagnostic accuracy of colposcopic image analysis.

Methods: A comprehensive dataset of colposcopic images was sourced from The First Affiliated Hospital of Bengbu Medical University, with each image manually annotated by expert clinicians. The U-NET++ architecture was employed for precise image segmentation, converting colposcopic images into binary representations for detailed analysis. The RepVGG framework was then applied for classification, focusing on detecting cervical cancer, HPV infections, and cervical intraepithelial neoplasia (CIN). From a dataset of 848 subjects, 424 high-quality images were selected for training, and the remaining 424 were used for validation.

Results: The deep learning model effectively identified disease severity in colposcopic images, achieving a predictive accuracy of 83.01%. Among the 424 validation subjects, cervical pathology was correctly identified in 352, demonstrating high diagnostic precision. The model excelled in detecting early-stage lesions, including CIN I and CIN II, which are crucial for initiating timely interventions. This capability positions the model as a valuable tool for reducing cervical cancer incidence and improving patient outcomes.

Conclusion: The integration of deep learning into colposcopic image analysis marks a significant advancement in early cervical cancer detection. The study suggests that AI-driven diagnostic tools can substantially improve screening accuracy; by reducing reliance on human interpretation, they minimize variability and enhance efficiency. In rural and underserved areas, the deployment of AI-based solutions could be transformative, potentially reducing cervical cancer incidence and mortality. With further refinement, these models could be adapted for broader population screening, aiding global efforts to eliminate cervical cancer as a public health threat.
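The abstract describes the two-stage pipeline only at a high level (U-NET++ segmentation into binary masks, followed by RepVGG classification). The sketch below illustrates one plausible way such a pipeline can be assembled in PyTorch, assuming the segmentation_models_pytorch and timm packages; the encoder choice, label set, threshold, and the use of the binary mask to gate the image before classification are illustrative assumptions, not details reported in the study.

```python
# Hypothetical sketch of a two-stage colposcopy pipeline: U-Net++ segmentation
# followed by RepVGG classification. Package and hyperparameter choices are
# assumptions for illustration only, not the authors' implementation.
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp
import timm


class ColposcopyPipeline(nn.Module):
    """Segment the cervical region of interest, then classify the masked image."""

    # Illustrative label set; the paper reports cancer, HPV infection, and CIN grades.
    CLASSES = ["normal", "HPV_infection", "CIN_I", "CIN_II", "CIN_III", "cancer"]

    def __init__(self) -> None:
        super().__init__()
        # U-Net++ with a ResNet-34 encoder producing a single-channel lesion mask.
        self.segmenter = smp.UnetPlusPlus(
            encoder_name="resnet34",
            encoder_weights=None,  # pretrained weights omitted in this sketch
            in_channels=3,
            classes=1,
        )
        # RepVGG-B0 classifier applied to the mask-gated RGB image.
        self.classifier = timm.create_model(
            "repvgg_b0", pretrained=False, num_classes=len(self.CLASSES)
        )

    def forward(self, images: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Stage 1: predict a binary mask of the region of interest.
        mask_logits = self.segmenter(images)                   # (N, 1, H, W)
        binary_mask = (torch.sigmoid(mask_logits) > 0.5).float()
        # Stage 2: classify the image restricted to the segmented region.
        class_logits = self.classifier(images * binary_mask)   # (N, num_classes)
        return binary_mask, class_logits


if __name__ == "__main__":
    model = ColposcopyPipeline().eval()
    batch = torch.rand(2, 3, 256, 256)  # stand-in for preprocessed colposcopic images
    with torch.no_grad():
        masks, logits = model(batch)
    print(masks.shape, logits.argmax(dim=1))
```

In such a setup, validation accuracy would be computed as the fraction of correctly classified subjects (the reported 352 of 424 corresponds to the stated 83.01%).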
Keywords: cervical cancer screening, public health strategy, deep learning, colposcopy, early diagnosis, resource-limited settings
Received: 08 May 2025; Accepted: 13 Aug 2025.
Copyright: © 2025 Li, Chen, Sun, Wang, Ma, Xu, Wang, Rong, Hu, Wei, Lu, Bai, Liu, Luo, Xu, Liu, Ye and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Lin Zhang, Monash University, Melbourne, Australia
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.