AUTHOR=Wang Ran, Lyu Chengqi, Yu Lvfeng
TITLE=A transformation uncertainty and multi-scale contrastive learning-based semi-supervised segmentation method for oral cavity-derived cancer
JOURNAL=Frontiers in Oncology
VOLUME=15
YEAR=2025
URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2025.1577198
DOI=10.3389/fonc.2025.1577198
ISSN=2234-943X
ABSTRACT=
Objectives: Oral cavity-derived cancer pathological images (OPI) are crucial for diagnosing oral squamous cell carcinoma (OSCC), but existing deep learning methods for OPI segmentation rely heavily on large, accurately labeled datasets, which are labor- and resource-intensive to obtain. This paper presents a semi-supervised segmentation method for OPI to mitigate the limitations of scarce labeled data by leveraging both labeled and unlabeled data.

Materials and methods: We use the Hematoxylin and Eosin (H&E)-stained oral cavity-derived cancer dataset (OCDC), which consists of 451 images with tumor regions annotated and verified by pathologists. Our method combines transformation uncertainty and multi-scale contrastive learning. The transformation uncertainty estimation evaluates the model’s confidence on data transformed via different methods, reducing discrepancies between the teacher and student models. Multi-scale contrastive learning enhances class similarity and separability while reducing teacher-student model similarity, encouraging diverse feature representations. Additionally, a boundary-aware enhanced U-Net is proposed to capture boundary information and improve segmentation accuracy.

Results: Experimental results on the OCDC dataset demonstrate that our method outperforms both fully supervised and existing semi-supervised approaches, achieving superior segmentation performance.

Conclusions: Our semi-supervised method, integrating transformation uncertainty, multi-scale contrastive learning, and a boundary-aware enhanced U-Net, effectively addresses data scarcity and improves segmentation accuracy. This approach reduces the dependency on large labeled datasets, promoting the application of AI in OSCC detection and improving the efficiency and accuracy of clinical diagnoses for OSCC.
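The core idea the abstract describes, weighting the student-teacher consistency on unlabeled images by an uncertainty estimate obtained from several input transformations, can be illustrated with a minimal PyTorch-style sketch. This is an assumption-laden illustration, not the authors' implementation: it assumes flip transformations (which are their own inverses), predictive entropy as the uncertainty surrogate, and a pixel-wise MSE consistency term; the function and variable names are hypothetical.

```python
import torch

# Invertible transformations used for the uncertainty estimate; a flip is its
# own inverse, which keeps this sketch simple.
FLIP_DIMS = [None, [-1], [-2], [-1, -2]]  # identity, h-flip, v-flip, both


def _flip(x, dims):
    """Apply (or undo) a spatial flip; None means identity."""
    return x if dims is None else torch.flip(x, dims=dims)


def transformation_uncertainty_consistency(student, teacher, image):
    """Sketch of an uncertainty-weighted consistency loss on unlabeled images.

    `student` and `teacher` are assumed to be segmentation networks mapping
    (B, C_in, H, W) images to (B, C, H, W) logits.
    """
    with torch.no_grad():
        # Teacher predictions under each transformation, mapped back to the
        # original orientation before averaging.
        probs = torch.stack([
            _flip(torch.softmax(teacher(_flip(image, d)), dim=1), d)
            for d in FLIP_DIMS
        ])                                    # (T, B, C, H, W)
        mean_prob = probs.mean(dim=0)         # averaged teacher prediction
        # Predictive entropy as a per-pixel uncertainty surrogate.
        uncertainty = -(mean_prob * torch.log(mean_prob + 1e-8)).sum(dim=1)
        weight = torch.exp(-uncertainty)      # confident pixels weighted near 1

    student_prob = torch.softmax(student(image), dim=1)
    # Pixel-wise consistency, down-weighted where the teacher is uncertain.
    consistency = ((student_prob - mean_prob) ** 2).mean(dim=1)  # (B, H, W)
    return (weight * consistency).mean()
```

In a typical teacher-student scheme of this kind, the loss above would be added to the supervised loss on labeled images, with the teacher updated as an exponential moving average of the student; the paper's multi-scale contrastive and boundary-aware components are not reflected in this sketch.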