ORIGINAL RESEARCH article
Front. For. Glob. Change
Sec. Fire and Forests
Semi-supervised segmentation of forest fires from UAV remote sensing images via panoramic feature fusion and pixel contrastive learning
Provisionally accepted
1 Nanjing University of Posts and Telecommunications, Nanjing, China
2 College of Information Science and Technology, Nanjing Forestry University, Nanjing, China
Unmanned aerial vehicle (UAV) remote sensing has become an important tool for forest fire monitoring due to its high spatial resolution and flexible data acquisition. However, existing forest fire segmentation methods still face significant limitations in accuracy and robustness, caused mainly by complex imaging environments, unclear fire boundaries, diverse fire shapes, and the limited availability of high-quality annotated data. To address these issues, we propose PPCNet (Panoramic Feature Fusion and Pixel Contrastive Learning Network), a semi-supervised method for segmenting forest fires in UAV remote sensing images that combines panoramic feature fusion and pixel-level contrastive learning. Specifically, a panoramic feature fusion (PFF) module is designed to dynamically integrate multi-scale and multi-level features, enhancing the joint representation of global structure and local detail and alleviating insufficient boundary representation under complex backgrounds. To further improve the segmentation of fire edges and texture details, a dual-frequency feature enhancement (DFFE) module combines high-frequency and low-frequency information to strengthen boundary and detail features. In addition, a pixel contrastive loss (PCL) based on pseudo-labels and feature contrast constraints reduces misclassification and feature degradation in the unsupervised branch, significantly improving segmentation accuracy and stability under limited labeled data. Experimental results on four typical forest fire datasets demonstrate that PPCNet achieves better segmentation performance than existing mainstream methods. The proposed method provides an effective technical solution for intelligent forest fire segmentation based on UAV remote sensing and shows good application potential.
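The abstract does not specify the exact form of the pixel contrastive loss. As a rough illustration of the general idea only, the following is a minimal NumPy sketch of an InfoNCE-style contrastive objective over pseudo-labeled pixel embeddings, in which pixels sharing a pseudo-label are treated as positives and all other pixels as negatives; the function name, the temperature parameter, and the sampling of pixels are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pixel_contrastive_loss(features, pseudo_labels, temperature=0.1):
    """InfoNCE-style contrastive loss over a sampled set of pixel embeddings.

    features:      (N, D) array of L2-normalised pixel embeddings.
    pseudo_labels: (N,) array of class ids (e.g. 0 = background, 1 = fire).
    Pixels with the same pseudo-label are positives; all others are negatives.
    """
    n = features.shape[0]
    sim = features @ features.T / temperature            # pairwise scaled cosine similarity
    logits_mask = ~np.eye(n, dtype=bool)                 # exclude self-similarity
    pos_mask = (pseudo_labels[:, None] == pseudo_labels[None, :]) & logits_mask

    # log-softmax over all other pixels, shifted by the row max for stability
    sim_max = np.max(np.where(logits_mask, sim, -np.inf), axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * logits_mask
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))

    # average log-probability of positives, over anchors that have positives
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_pos = (log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_pos.mean()

# Toy usage: two well-separated clusters of pixel embeddings
rng = np.random.default_rng(0)
fire = rng.normal(loc=1.0, size=(8, 16))
background = rng.normal(loc=-1.0, size=(8, 16))
feats = np.vstack([fire, background])
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labels = np.array([1] * 8 + [0] * 8)
loss = pixel_contrastive_loss(feats, labels)
```

In a semi-supervised pipeline, the pseudo-labels would typically come from the model's own confident predictions on unlabeled images; the loss above then pulls same-class pixel features together and pushes different-class features apart, which matches the pull/push role described for the PCL term.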
Keywords: UAV remote sensing, fire, forest fire segmentation, semi-supervised learning, feature fusion
Received: 20 Jul 2025; Accepted: 31 Oct 2025.
Copyright: © 2025 Ma and Lin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Haifeng Lin, haifeng.lin@njfu.edu.cn
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
