ORIGINAL RESEARCH article
Front. Comput. Sci.
Sec. Computer Vision
Volume 7 - 2025 | doi: 10.3389/fcomp.2025.1613648
Improving Remote Sensing Scene Classification with Data Augmentation Techniques to Mitigate Class Imbalance
Provisionally accepted
- 1 Qingdao Huanghai University, Qingdao, China
- 2 Shandong University of Science and Technology, Qingdao, Shandong Province, China
High-resolution remote sensing scene images provide abundant information about ground objects, yet classical methods often fail to achieve satisfactory results for complex urban scene classification. Deep learning-based remote sensing image scene classification (RSSC) represents an important approach for understanding semantic information. Traditional methods are unable to meet the accuracy requirements of RSSC and are challenged by issues such as limited labeled samples and class imbalance, which can bias classifiers. This paper explores the feasibility of mitigating classification bias by reducing the imbalance ratio (IR) of the dataset. First, a class-imbalanced dataset was constructed from Very High Resolution (VHR) images, labeled into nine land use/land cover (LULC) categories. Second, comprehensive data augmentation methods (mirroring, rotation, cropping, HSV perturbation, and gamma transformation) were applied, reducing the dataset's IR from 9.38 to 1.28. Subsequently, four architectures, MobileNet-v2, ResNet101, ResNeXt101_32×32d, and Transformer, were trained and evaluated on both the class-balanced and class-imbalanced datasets. The results show that the classification bias caused by class imbalance was alleviated, and overall classifier performance improved substantially. Specifically, for the most severely underrepresented category (intersections), Precision and Recall improved by up to 128% and 102%, respectively, narrowing the gap with other categories and reducing classification bias. Furthermore, the average Kappa and overall accuracy (OA) increased by 11.84% and 12.97%, respectively, and reduced standard deviations in Recall and Precision demonstrate enhanced model stability.

Remote sensing scene classification assigns images to predefined semantic classes based on scene-specific information, representing high-level abstractions of scene content.
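The five augmentation operations named in the abstract, together with the imbalance ratio they are used to reduce, can be sketched as below. This is a minimal, dependency-free illustration operating on NumPy arrays of RGB values in [0, 1]; the paper's exact augmentation parameters and sampling strategy are not given here, so the function names and defaults are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import colorsys

def mirror(img):
    """Horizontal mirroring (left-right flip)."""
    return img[:, ::-1]

def rotate90(img, k=1):
    """Rotation by k multiples of 90 degrees."""
    return np.rot90(img, k)

def random_crop(img, size, rng):
    """Crop a size x size patch at a random position."""
    h, w = img.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    return img[y:y + size, x:x + size]

def gamma_transform(img, gamma):
    """Power-law intensity transform on [0, 1] floats."""
    return np.clip(img, 0.0, 1.0) ** gamma

def hsv_jitter(img, dh=0.0, ds=0.0, dv=0.0):
    """Perturb hue/saturation/value per pixel (slow but stdlib-only)."""
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*img[i, j])
            h = (h + dh) % 1.0
            s = min(max(s + ds, 0.0), 1.0)
            v = min(max(v + dv, 0.0), 1.0)
            out[i, j] = colorsys.hsv_to_rgb(h, s, v)
    return out

def imbalance_ratio(class_counts):
    """IR = size of the largest class / size of the smallest class."""
    return max(class_counts) / min(class_counts)
```

In practice, augmented copies of minority-class images (e.g. intersections) would be generated with these operations until `imbalance_ratio` over the per-class sample counts falls toward 1, which is the mechanism by which the dataset's IR is driven from 9.38 down to 1.28.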
However, traditional classification methods are limited by their reliance on low-level feature analysis, which restricts their capacity to extract high-level semantic information and thus often fails to meet the accuracy demands of RSSC. Currently, the proliferation of deep learning has spurred numerous methodologies for remote sensing scene image classification, which can be broadly categorized into three types: autoencoder-based, Convolutional Neural Network (CNN)-based, and Generative Adversarial Network (GAN)-based approaches.
Keywords: remote sensing image scene classification, class imbalance, deep learning, fine-tuning, data augmentation
Received: 18 Apr 2025; Accepted: 18 Sep 2025.
Copyright: © 2025 Wang, Zhao, Chen and Zhan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Lili Zhan, skd992016@sdust.edu.cn
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.