AUTHOR=Niu Shurong, Zhang Lili, Wang Lina, Zhang Xue, Liu Erniao
TITLE=Hybrid feature fusion in cervical cancer cytology: a novel dual-module approach framework for lesion detection and classification using radiomics, deep learning, and reproducibility
JOURNAL=Frontiers in Oncology
VOLUME=15
YEAR=2025
URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2025.1595980
DOI=10.3389/fonc.2025.1595980
ISSN=2234-943X
ABSTRACT=
Objective: Cervical cancer screening through cytology remains the gold standard for early detection, but manual analysis is time-consuming, labor-intensive, and prone to inter-observer variability. This study proposes an automated deep learning-based framework that integrates lesion detection, feature extraction, and classification to enhance the accuracy and efficiency of cytological diagnosis.
Materials and methods: A dataset of 4,236 cervical cytology samples was collected from six medical centers, with lesion annotations categorized into six diagnostic classes (NILM, ASC-US, ASC-H, LSIL, HSIL, SCC). Four deep learning models, Swin Transformer, YOLOv11, Faster R-CNN, and DETR (DEtection TRansformer), were employed for lesion detection, and their performance was compared using mAP, IoU, precision, recall, and F1-score. From the detected lesion regions, radiomics features (n=71) and deep learning features (n=1,792) extracted with EfficientNet were analyzed. Dimensionality reduction techniques (PCA, LASSO, ANOVA, MI, t-SNE) were applied to optimize feature selection before classification using XGBoost, Random Forest, CatBoost, TabNet, and TabTransformer. Additionally, an end-to-end classification model using EfficientNet was evaluated. The framework was validated using internal cross-validation and external testing on APCData (3,619 samples).
Results: The Swin Transformer achieved the highest lesion detection accuracy (mAP: 0.94 on external testing), outperforming YOLOv11, Faster R-CNN, and DETR.
Combining radiomics and deep features with TabTransformer yielded superior classification (test accuracy: 94.6%, AUC: 95.9%, recall: 94.1%), exceeding both single-modality and end-to-end models. Ablation studies confirmed the importance of both the detection module and hybrid feature fusion. External validation demonstrated high generalizability (accuracy: 92.8%, AUC: 95.1%). Comprehensive statistical analyses, including bootstrapped confidence intervals and DeLong's test, further substantiated the robustness and reliability of the proposed framework.
Conclusions: The proposed AI-driven cytology analysis framework offers superior lesion detection, feature fusion-based classification, and robust generalizability, providing a scalable solution for automated cervical cancer screening. Future efforts should focus on explainable AI (XAI), real-time deployment, and larger-scale validation to facilitate clinical integration.
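The hybrid fusion step described in the abstract can be illustrated with a minimal sketch: per-lesion radiomics vectors (n=71) and EfficientNet deep-feature vectors (n=1,792) are concatenated into a single feature matrix, which is then reduced before classification. The sample count, random data, and the choice of 50 principal components below are purely illustrative assumptions, not values from the paper; the PCA here is a plain SVD on centered data standing in for the paper's full PCA/LASSO/ANOVA/MI/t-SNE comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical batch of lesions; feature widths follow the abstract
# (71 radiomics features, 1,792 EfficientNet deep features per lesion).
n_samples = 100
radiomics = rng.normal(size=(n_samples, 71))
deep = rng.normal(size=(n_samples, 1792))

# Hybrid feature fusion: simple column-wise concatenation.
fused = np.concatenate([radiomics, deep], axis=1)  # shape (100, 1863)

# Dimensionality reduction via PCA (SVD on centered data),
# keeping an illustrative 50 components before the classifier.
centered = fused - fused.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:50].T  # shape (100, 50)
```

The reduced matrix would then be passed to a tabular classifier (XGBoost, CatBoost, TabTransformer, etc.) as in the paper's comparison.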