
ORIGINAL RESEARCH article

Front. Physiol.

Sec. Computational Physiology and Medicine

Volume 16 - 2025 | doi: 10.3389/fphys.2025.1644589

LESS-Net: A Lightweight Network for Epistaxis Image Segmentation Using Similarity-Based Contrastive Learning

Provisionally accepted
Mengzhen Lai1, Junyang Chen2, Yutong Huang1, Xianyao Wang1, Nanbo Xu1, Shengxiang Zhou1, Xiangsen Zhu1, Yunhan Wu1, Bing Yang1, Guanyu Chen1, Jun Li1,3,4*
  • 1College of Information Engineering, Sichuan Agricultural University, Ya'an, China
  • 2School of Computer Science and Engineering, Southeast University, Nanjing, China
  • 3Sichuan Agricultural University Key Laboratory of Agricultural Information Engineering of Sichuan Province, Ya'an, China
  • 4Ya'an Digital Agricultural Engineering Technology Research Center, Ya'an, China

The final, formatted version of the article will be published soon.

ABSTRACT

Introduction: Accurate automated segmentation of epistaxis (nosebleeds) from endoscopic images is critical for clinical diagnosis but is significantly hampered by the scarcity of annotated data and the inherent difficulty of precise lesion delineation. These challenges are particularly pronounced in resource-constrained healthcare environments, creating a pressing need for data-efficient deep learning solutions.

Methods: To address these limitations, we developed LESS-Net, a lightweight, semi-supervised segmentation framework. LESS-Net effectively leverages unlabeled data through a novel combination of consistency regularization and contrastive learning, which mitigates data distribution mismatches and class imbalance. The architecture incorporates an efficient MobileViT backbone and introduces a multi-scale feature fusion module to enhance segmentation accuracy beyond what is achievable with traditional skip connections.

Results: Evaluated on a public Nasal Bleeding dataset, LESS-Net significantly outperformed seven state-of-the-art models. With only 50% of the data labeled, our model achieved a mean Intersection over Union (mIoU) of 82.51%, a Dice coefficient of 75.62%, and a mean Recall of 92.12%, while concurrently reducing model parameters by 73.8%. Notably, this semi-supervised performance surpassed that of all competitor models trained with 100% labeled data. The framework's robustness was further validated at extremely low label ratios of 25% and 5%. Ablation studies confirmed the distinct contribution of each architectural component to the model's overall efficacy.

Conclusion: LESS-Net provides a powerful and data-efficient framework for medical image segmentation. Its demonstrated ability to achieve superior performance with limited supervision highlights its substantial potential to enhance AI-driven diagnostic capabilities and improve patient care in real-world clinical workflows, especially in underserved settings.
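The two semi-supervised ingredients named in the abstract — consistency regularization and similarity-based contrastive learning — can be illustrated in generic form. The sketch below is not the authors' LESS-Net implementation; it is a minimal NumPy illustration of the standard formulations: a consistency term penalizing disagreement between predictions on two augmented views of the same unlabeled image, and an InfoNCE-style contrastive term driven by cosine similarity. All function names, the toy data, and the temperature value are illustrative assumptions.

```python
import numpy as np

def consistency_loss(p_weak, p_strong):
    """Consistency regularization: mean-squared disagreement between
    per-pixel predictions on two augmented views of the same image."""
    return float(np.mean((p_weak - p_strong) ** 2))

def info_nce(anchor, positive, negatives, tau=0.1):
    """Similarity-based contrastive (InfoNCE-style) loss: pull the anchor
    embedding toward its positive, push it away from the negatives."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Positive pair occupies index 0 of the logit vector.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))

# Toy unlabeled example: per-pixel foreground probabilities under a
# weak and a strong augmentation, plus a feature embedding.
rng = np.random.default_rng(0)
p_weak = rng.random((4, 4))
p_strong = np.clip(p_weak + rng.normal(0, 0.05, (4, 4)), 0.0, 1.0)
anchor = rng.normal(size=8)
negatives = [rng.normal(size=8) for _ in range(4)]

total = consistency_loss(p_weak, p_strong) + info_nce(
    anchor, anchor + 0.01 * rng.normal(size=8), negatives)
print(round(total, 4))
```

In a real semi-supervised pipeline these unsupervised terms would be weighted and added to a supervised segmentation loss (e.g. Dice or cross-entropy) computed on the labeled fraction of the data.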

Keywords: Contrastive learning, Consistency regularization, Epistaxis, image segmentation, Semi-Supervised Learning

Received: 11 Jun 2025; Accepted: 19 Sep 2025.

Copyright: © 2025 Lai, Chen, Huang, Wang, Xu, Zhou, Zhu, Wu, Yang, Chen and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Jun Li, lijun@sicau.edu.cn

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.