- 1College of Computer Science and Technology, Inner Mongolia Minzu University, Tongliao, China
- 2College of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang, China
- 3College of Engineering, Inner Mongolia Minzu University, Tongliao, China
- 4Innovation Center for Intelligent Forage Equipment, Inner Mongolia Minzu University, Tongliao, China
- 5College of Grassland Science, Inner Mongolia Minzu University, Tongliao, China
Introduction: Monitoring grazing intensity is crucial for maintaining ecological balance and promoting the sustainable management of sandy grasslands. Traditional ground surveys and single-source remote sensing often lack the spatial resolution, spectral richness, and robustness required to accurately characterize heterogeneous grazing impacts. Unmanned aerial vehicle (UAV)-based multi-source remote sensing provides fine-scale, repeatable observations that can overcome the limitations of traditional field surveys.
Methods: Grazing experiments were conducted in the sandy grasslands of Inner Mongolia, China, using UAVs to capture visible and multispectral imagery across plots subjected to different grazing intensities. Spectral responses were analyzed using mean–variance statistics and Tukey’s multiple comparison tests. A series of novel spectral indices was constructed based on separability analysis and integrated with traditional vegetation indices to address the limited sensitivity of conventional indices and the redundancy of multi-index feature sets. An automatic incremental feature selection (AIFS) algorithm was developed to adaptively optimize the feature subset and enhance model robustness, with support vector machine, k-nearest neighbor, and random forest classifiers used for grazing intensity recognition.
Results: Distinct spectral responses to grazing disturbance were observed: visible bands increased with grazing intensity due to enhanced soil background effects, while red-edge and near-infrared bands effectively captured reductions in chlorophyll content and canopy structure under moderate to severe grazing. Traditional vegetation indices were sensitive to extreme grazing, whereas the proposed indices showed superior performance in distinguishing moderate grazing levels. The AIFS-optimized feature subset reduced redundancy and improved model accuracy, achieving the highest recognition performance (OA=92.13%, Kappa=88.99%)—outperforming models using all features or single-source data.
Discussion: Integrating UAV visible and multispectral imagery with intelligent feature selection enhances the detection of grazing-induced vegetation responses. This approach provides a robust framework for high-precision grassland monitoring and sustainable ecological management in arid and semi-arid regions.
1 Introduction
Grasslands are globally significant ecosystems, providing essential resources for livestock production and playing a vital role in maintaining ecological balance, sequestering carbon, and conserving water (Zhao et al., 2020). China has extensive grasslands, with the sandy grasslands of Inner Mongolia serving as a typical example, which function in both ecological protection and grazing production. However, long-term overgrazing and other human disturbances have caused severe degradation of sandy grasslands, leading to vegetation loss, wind erosion, and biodiversity decline (Lin et al., 2022; Jiang et al., 2024). Grazing intensity is a key indicator of grassland exploitation and degradation, and its accurate monitoring is vital for the ecological protection and sustainable use of sandy grasslands. Therefore, there is an urgent need to develop efficient and precise monitoring methods to support scientific management and policy formulation (Reinermann et al., 2020; Ali and Kaul, 2025).
Currently, monitoring grazing intensity on grasslands mainly relies on traditional field surveys and sampling plots. These methods remain direct and reliable, but are time-consuming, labor-intensive, and limited in spatial coverage, making dynamic monitoring challenging (Wang et al., 2022; Yu et al., 2025). Satellite remote sensing has been widely used for large-scale grassland monitoring; however, its spatial and temporal resolution often cannot meet the requirements for small-scale, detailed detection of grazing intensity in sandy grasslands (Ali et al., 2016; Wang et al., 2023). In contrast, unmanned aerial vehicle (UAV) remote sensing offers advantages such as flexibility, high resolution, and timely data acquisition, making it particularly suitable for detailed studies of sandy grassland surface characteristics and grazing disturbances (Lyu et al., 2022; Bazzo et al., 2023; Lyu et al., 2024). UAV-based studies typically acquire visible, multispectral, or hyperspectral imagery. Previous studies have demonstrated that hyperspectral data can capture subtle spectral variations, leading to improved accuracy in vegetation classification, degradation detection, and biomass estimation. However, its high equipment costs and complex data processing limit its large-scale application (Fernández-Habas et al., 2022; Wengert et al., 2022; Liu et al., 2023; Zhu et al., 2023; Zhang et al., 2024a). Therefore, cost-effective visible and multispectral data have become increasingly crucial for monitoring sandy grasslands (Rossi et al., 2019).
In recent years, visible-light imagery has been utilized to estimate grassland coverage and degradation levels due to its ease of acquisition and simplicity of processing (Possoch et al., 2016; Lussem et al., 2017, 2018; Zhang et al., 2022). Multispectral imagery, due to its rich spectral information, has been widely applied for estimating grass biomass and monitoring grazing disturbances (Orlando et al., 2022; Schucknecht et al., 2022; Gao et al., 2024; Zwick et al., 2024). However, monitoring based on a single data source often suffers from limited robustness across different grazing conditions, low generalizability, and inconsistent performance in distinguishing moderate grazing levels—one of the most critical yet difficult categories to detect. To address these limitations, multi-source data fusion has gradually emerged as a popular research focus (Barber et al., 2021; Liu et al., 2025). For example, some researchers extract key indicators, such as vegetation indices, and perform multi-indicator analyses to identify Zoysia japonica, thereby evaluating its cold tolerance (Ku et al., 2023). Other studies have developed multiple linear regression (MLR) and generalized additive models (GAM) to estimate grassland aboveground biomass (AGB) using texture features from UAV RGB imagery and vegetation indices from multispectral imagery (Pan et al., 2024). Researchers collected UAV RGB and multispectral imagery, combined vegetation structure variables with spectral features, and applied machine learning algorithms for image segmentation and species classification (Pranga et al., 2024). 
However, despite progress in multi-source fusion, existing studies still exhibit three major gaps: (1) fusion is often simple (e.g., direct concatenation of indices or original bands) and lacks targeted design based on spectral separability; (2) redundant or irrelevant features in multi-source datasets reduce model stability and generalizability; (3) limited attention has been given to constructing spectral indices specifically sensitive to grazing-induced vegetation responses, especially at moderate grazing intensities.
To address these challenges, this study focuses on detecting grazing intensity in a typical sandy grassland. By integrating visible and multispectral UAV data, the study examines the correlations and separability of spectral features. We construct targeted spectral indices (SIs) based on the physical mechanisms of vegetation response to grazing and fuse them with traditional vegetation indices. Concurrently, an automatic incremental feature selection (AIFS) method is applied to select multi-source features, reducing redundancy and enhancing adaptability to heterogeneous grazing conditions. Ultimately, by integrating and optimizing multiple-source SIs, the study achieves efficient and precise detection of grazing intensity in sandy grasslands, providing a novel approach for ecological monitoring and grassland management.
2 Materials and methods
2.1 Overview of the experiment site
The study was conducted at Manghatu Village, Gerchaolu Sumu, Zhalute Banner, Tongliao City, Inner Mongolia, China (44°62′24″N, 120°45′14″E) at an altitude of 482 m. The region features a temperate semi-arid continental climate with distinct seasons: dry, windy springs; hot summers with scarce rainfall; cool autumns with significant diurnal temperature variations; and long, cold winters. The climatic parameters described in this section were obtained from the China Meteorological Data Service Center and represent long-term averages based on a 30-year period (1995–2024). The long-term mean temperature is 5.5 °C, and the annual accumulated temperature ranges from 2400 to 2600 °C. Annual sunshine averages approximately 3000 hours, with a frost-free period of 115–130 days. Annual rainfall ranges from 300 to 400 mm, with the majority occurring between June and August, accounting for ~70% of the total precipitation. Annual evaporation exceeds 1800 mm, with an average relative humidity of 49%. The grassland is classified as a temperate mountain grassland, characterized by sandy soil with a pH of 7.6. The dominant species is Agropyron cristatum, with key associated species including Lespedeza davurica, Cleistogenes squarrosa, and Carex duriuscula.
2.2 Experimental design
This study investigated the effects of varying grazing intensities on grassland ecosystems, commencing in June 2025. Grazing occurred from 15 June to 15 September, during which the sheep remained in their assigned subplots without interruption. The experimental site covered a total area of 3 ha, divided into four subplots of 0.75 ha each. A randomized block design was used to establish four grazing stocking rate treatments. Each subplot was grazed by 0, 3, 6, or 9 six-month-old rams, corresponding to 0, 4, 8, or 12 sheep units per ha, respectively. These treatments corresponded to no grazing (NG), light grazing (LG), moderate grazing (MG), and severe grazing (SG) (Zhang et al., 2024b). The overview of the experimental site and the distribution of the grazing intensity treatments are shown in Figure 1.
Figure 1. Location of the experimental sites and distribution of areas under different grazing intensities. (a) China, (b) Inner Mongolia, and (c) the study site. (NG, No Grazing; LG, Light Grazing; MG, Moderate Grazing; SG, Severe Grazing).
2.3 Data acquisition
2.3.1 Ground investigation
Grassland technical personnel conducted ground surveys and established sample plots within experimental areas subjected to varying levels of grazing intensity. Sample plots were manually established for four grazing intensity levels—no grazing, light grazing, moderate grazing, and severe grazing—to identify herbaceous species influenced by grazing. These plots enabled the precise identification and extraction of regions of interest in the UAV imagery for subsequent analysis.
2.3.2 Image collection and region of interest extraction
This study used a Mavic 3M UAV (SZ DJI Technology Co., Ltd., Shenzhen, China) to acquire imagery (Yu et al., 2023). The UAV is equipped with a visible light camera and a four-channel multispectral camera. The visible light camera features a 4/3-inch CMOS sensor with 20 million effective pixels. The multispectral camera captures four bands—green (560 ± 16 nm), red (650 ± 16 nm), red edge (730 ± 16 nm), and near-infrared (860 ± 26 nm)—with a resolution of 2592 × 1944 pixels per band. The UAV integrates a Global Navigation Satellite System (GNSS) with a Real-Time Kinematic (RTK) module, enabling centimeter-level positioning accuracy.
Imagery was acquired on 19 August 2025, between 11:00 and 12:00, at a flight altitude of 30 m. Both forward and side overlaps were set at 80% to ensure high-quality stitching and subsequent processing. A total of 1,536 visible and 3,155 multispectral images were acquired during the flight. Images were stitched using DJTerra software V3.0 (SZ DJI Technology Co., Ltd., Shenzhen, China), producing visible light orthoimages at ~0.87 cm/pixel and multispectral images at ~1.54 cm/pixel. RGB imagery corresponding to different grazing intensity areas is shown in Figure 2. After UAV image stitching, the red, green, and blue channels from the visible imagery, along with the green, red, red-edge, and near-infrared reflectance from the multispectral imagery, were extracted for the marked ground areas using ENVI 5.6 software (Harris Geospatial, Boulder, CO, USA) (Yu et al., 2024). To ensure representativeness, region of interest (ROI) samples were selected using a stratified random sampling strategy within each grazing treatment subplot. A total of 720 samples were extracted, with 180 samples assigned to each of the four grazing intensity levels (NG, LG, MG, SG), ensuring balanced class distribution and consistent spatial coverage across treatments.
Figure 2. Visible-light UAV images illustrating different grazing intensities. (a) No grazing, (b) light grazing, (c) moderate grazing, and (d) severe grazing.
2.4 Grazing intensity monitoring method based on UAV multi-source remote sensing
Figure 3 illustrates the framework for monitoring grazing intensity using multi-source UAV imagery. The process mainly involves four steps: (1) data acquisition and preprocessing, (2) construction and fusion of multi-source SIs, (3) AIFS, and (4) classification modeling and accuracy evaluation. Data pre-processing includes normalization, standardization, and outlier removal. Multi-source SIs are derived from multiple features, integrating conventional and self-constructed indices to capture complementary information across data sources. An AIFS method, which combines feature importance assessment with redundancy removal, is employed to select the optimal combination of indices gradually. Finally, the performance of the optimized fused features is evaluated using various classification algorithms to achieve high accuracy and robust monitoring.
2.4.1 Data preprocessing
To ensure balanced contributions from visible and multispectral data in subsequent multi-source feature fusion, the raw data were first preprocessed. The pixel values of the red (R), green (G), and blue (B) bands in the visible imagery were normalized to chromatic coordinates as follows (Cao et al., 2021):

r = R / (R + G + B), g = G / (R + G + B), b = B / (R + G + B)
The reflectance values of the green, red, red-edge, and near-infrared bands in the multispectral imagery were already expressed on a common reflectance scale and therefore did not require further normalization. This preprocessing lays the foundation for the subsequent construction and fusion of multi-source SIs, preventing feature bias caused by differences in numerical scales.
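As a minimal sketch of this preprocessing step, assuming the normalization cited from Cao et al. (2021) is the common chromatic-coordinate transform (r = R/(R+G+B), etc.; the authors' exact formula is not reproduced here):

```python
import numpy as np

def normalize_rgb(img):
    """Convert raw RGB digital numbers to chromatic coordinates.

    Each band is divided by the per-pixel brightness sum, so the three
    normalized channels become illumination-invariant and sum to 1,
    matching the numerical scale of 0-1 multispectral reflectance.
    """
    img = img.astype(np.float64)
    total = img.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on fully dark pixels
    return img / total
```

Applied to an H × W × 3 orthoimage array, this yields the r, g, b channels used by visible-light indices such as ExG.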
2.4.2 Multi-source SIs construction
SIs were calculated from combinations of different spectral bands to enhance target features, improve stability, and mitigate interference from factors such as illumination (Guan et al., 2022). Common approaches for constructing SIs include ratio-based, difference-based, and interaction-based combinations. In this study, 14 self-constructed indices were developed by integrating visible and multispectral band information, optimizing inter-band complementarity, and enhancing feature recognition. Both conventional and self-constructed SIs are summarized in Table 1 (Tilly et al., 2015; Bendig et al., 2015; Yue et al., 2019; Zhou et al., 2019; Liedtke et al., 2020; Varela et al., 2021; Zhang et al., 2021b; Pamungkas, 2023).
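To illustrate the three construction styles, the sketch below computes two conventional indices with their standard formulas (ExG from normalized visible bands; NDVI from multispectral reflectance) plus one hypothetical cross-source combination; `cross_source_ratio` is an illustration of the interaction-based style only, not one of the paper's actual idx1–idx14 definitions:

```python
import numpy as np

def exg(r, g, b):
    """Excess Green index from normalized visible bands: ExG = 2g - r - b."""
    return 2.0 * g - r - b

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from multispectral reflectance."""
    return (nir - red) / (nir + red + 1e-12)  # epsilon guards against 0/0

def cross_source_ratio(nir, b):
    """Hypothetical cross-source index combining a multispectral band (NIR)
    with a visible band (blue), illustrating interaction-based construction."""
    return (nir - b) / (nir + b + 1e-12)
```

All three operate elementwise on full image arrays as well as on scalar band values.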
2.4.3 Automatic incremental feature selection
To further enhance the discriminative power of the integrated features, this study employed an AIFS method on the fused feature matrix, which combines visible indices, multispectral indices, and self-constructed indices.
2.4.3.1 Feature importance ranking
Feature importance was evaluated using the Random Forest (RF) algorithm (Belgiu and Drăguţ, 2016). Let the feature fusion matrix be denoted as X (n × m), where n is the number of samples and m is the total number of features, and let the class vector be y. The RF model builds multiple decision trees and calculates the incremental prediction error on out-of-bag (OOB) samples for each tree to obtain the importance score for each feature:

VI(j) = (1/T) Σ_t ΔE_t(j), t = 1, …, T

Here, T is the total number of decision trees, and ΔE_t(j) represents the increase in OOB error caused by randomly permuting the values of feature j in the t-th tree. Features were then ranked in descending order based on VI(j), resulting in the sorted feature matrix.
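The permutation mechanism behind this score can be sketched in a model-agnostic form: shuffle one feature column at a time and record the resulting accuracy drop. This is a simplified stand-in for the per-tree OOB computation (it uses a single `predict` function rather than per-tree OOB subsets):

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic permutation importance: for each feature j, shuffle its
    column and record the mean drop in accuracy, mirroring the OOB-error
    increase that defines the RF importance score VI(j)."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)            # accuracy with intact features
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])              # destroy feature j's information
            drops.append(base - np.mean(predict(Xp) == y))
        scores[j] = np.mean(drops)
    return scores
```

Features are then ranked by sorting `scores` in descending order.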
2.4.3.2 Incremental feature selection for combinations
To minimize the impact of redundant features while maintaining classification discriminative power, an incremental combination strategy was used to select features gradually. The procedure is as follows:
1) Candidate features were sequentially added from the sorted feature matrix.
2) The Pearson correlation coefficient between the candidate feature f_c and each feature f_s in the selected set S was calculated (Guan et al., 2023):

r(f_c, f_s) = cov(f_c, f_s) / (σ(f_c) σ(f_s))

If the absolute Pearson correlation between the candidate feature and any feature in the selected set S exceeded the threshold θ = 0.8, the candidate feature was considered highly correlated with existing features and was excluded; otherwise, it was added to the selected feature set.
3) For each newly added feature, overall accuracy (OA) was calculated using 5-fold cross-validation:

OA = 1 − (1/5) Σ_i E_i, i = 1, …, 5

Here, E_i denotes the error rate for the i-th fold. Newly added features were retained in the final subset only if they improved the OA. This strategy ensured that the selected features minimized redundancy while maximizing classification performance.
2.4.3.3 Output the final feature subset
After iterative incremental selection, the optimal feature subset is obtained. This subset serves as input for subsequent grazing intensity classification, enabling high-precision and robust classification performance.
Algorithm 1 summarizes the AIFS procedure. The algorithm ranks features using RF importance, incrementally evaluates candidate features based on Pearson correlation and cross-validated accuracy, and retains only those that improve classification while minimizing redundancy.
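A minimal sketch of this procedure follows, assuming the importance scores from step 2.4.3.1 are precomputed and the `evaluate` callback stands in for the 5-fold cross-validated OA of step 3:

```python
import numpy as np

def aifs(X, y, importance, evaluate, theta=0.8):
    """Automatic incremental feature selection (sketch).

    X          : (n, m) fused feature matrix
    y          : length-n class vector
    importance : length-m array of RF importance scores
    evaluate   : callable(X_subset, y) -> cross-validated OA
    theta      : Pearson-correlation redundancy threshold
    """
    order = np.argsort(importance)[::-1]      # rank by decreasing importance
    selected, best_oa = [], 0.0
    for j in order:
        # redundancy check: skip features highly correlated with the subset
        if any(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) > theta
               for s in selected):
            continue
        oa = evaluate(X[:, selected + [j]], y)
        if oa > best_oa:                      # keep only accuracy-improving features
            selected.append(j)
            best_oa = oa
    return selected, best_oa
```

The returned index list is the optimal subset passed to the downstream classifiers.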
2.4.4 Classification methods
This study used three widely applied classification algorithms—K-nearest neighbor (KNN) (Abdullah et al., 2001), support vector machine (SVM) (Guan et al., 2025), and Random Forest (RF) (Guo et al., 2024)—to classify grazing intensity based on the fused features.
2.4.4.1 K-nearest neighbor
KNN is a non-parametric classification method based on the similarity of samples. For a sample x, the algorithm calculates distances to all training samples, selects the K nearest neighbors, and assigns x to the majority class of these neighbors. The procedure is expressed as:

ŷ = argmax_c Σ w_i · I(y_i = c), x_i ∈ N_K(x)

Here, N_K(x) denotes the set of K nearest neighbors of x, w_i represents the distance weight, and I(·) is the indicator function.
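The voting rule can be sketched as follows, simplified to Euclidean distance and uniform weights (the study also evaluated Manhattan and cosine metrics and distance weighting):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples
    (Euclidean distance, uniform weights)."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training samples
    nearest = np.argsort(d)[:k]               # indices of the k nearest neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]          # majority class
```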
2.4.4.2 Support vector machine
SVM is a maximum-margin method for binary or multi-class classification, separating samples by finding the optimal hyperplane that maximizes the margin between classes. In this study, a radial basis function (RBF) kernel was used for non-linear mapping. Kernel scale and box constraint parameters were optimized via grid search. The SVM classifier is expressed as:

f(x) = sign(Σ_i α_i y_i K(x_i, x) + b)

Here, K(x_i, x) denotes the RBF kernel, α_i is the Lagrange multiplier, and b is the bias term.
2.4.4.3 Random forest
Random Forest (RF) is an ensemble learning method that classifies data by constructing multiple decision trees. Each tree is trained on a bootstrap sample of the data, with a random subset of features considered at each node split, thereby reducing overfitting and improving stability. The final classification is determined by majority voting:

ŷ = mode{h_t(x)}, t = 1, …, T

Here, h_t(x) is the prediction of the t-th decision tree, and T is the total number of trees.
2.4.4.4 Classification test and parameter setting
To ensure balanced training and test sets, the dataset was divided using the Sample set Partitioning based on joint X–Y distances (SPXY) method, with 70% allocated to training and 30% to testing (Wu et al., 1996). Grid search optimization was performed for all classifiers. For KNN, the number of neighbors K was set to [3, 5, 7, 9] with distance metrics including Euclidean, Manhattan, and cosine. For SVM, the RBF kernel scale was set to [0.1, 0.5, 1, 2], and the box constraint parameter C was set to [0.1, 1, 10]. For RF, the number of trees was set to [100, 200, 300], with minimum leaf nodes [1, 5, 10].
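The grid search described above can be sketched with scikit-learn (an assumption for illustration; the authors' software is not stated, and note that scikit-learn's `gamma` plays the role of the RBF kernel scale while `C` is the box constraint):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def tune_svm(X_train, y_train):
    """Grid search over the SVM parameter ranges listed in the text,
    using 5-fold cross-validated accuracy to pick the best combination."""
    grid = {"gamma": [0.1, 0.5, 1, 2],   # RBF kernel-scale candidates
            "C": [0.1, 1, 10]}           # box-constraint candidates
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5, scoring="accuracy")
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_params_
```

Analogous grids apply to KNN (`n_neighbors`, `metric`) and RF (`n_estimators`, minimum leaf size).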
2.4.5 Evaluation indicators
2.4.5.1 Separability evaluation
To assess the discriminative capability of spectral features under different grazing intensities, a qualitative analysis was first performed by comparing means and variances. Quantitative assessment was conducted using one-way analysis of variance (ANOVA) followed by Tukey’s multiple comparison test (Stoline, 1981). To quantify the classification ability of SIs, the M statistic was introduced, defined as:

M = |μ1 − μ2| / (σ1 + σ2)

Here, μ1 and μ2 denote the sample means of the two categories, and σ1 and σ2 represent the corresponding within-class standard deviations. A higher M value indicates a greater difference in feature distribution between the two categories, reflecting stronger separability (Smith et al., 2007).
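The M statistic reduces to a one-line computation on the per-class samples of an index:

```python
import numpy as np

def m_statistic(a, b):
    """Separability M statistic between two feature samples:
    M = |mean(a) - mean(b)| / (std(a) + std(b)).
    M > 1 indicates well-separated class distributions."""
    return abs(np.mean(a) - np.mean(b)) / (np.std(a) + np.std(b) + 1e-12)
```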
2.4.5.2 Evaluation of classification accuracy
After fitting the classifier to the training set, its performance was evaluated on the test set using overall accuracy (OA) and the Kappa coefficient (κ) (Guan et al., 2022). OA was calculated as:

OA = (1/N) Σ_i n_ii, i = 1, …, k

Here, k represents the total number of categories, n_ii represents the number of correctly classified samples in category i, and N is the total number of test samples.
The Kappa coefficient (κ) measures the agreement between the observed classification accuracy and that expected by random chance, calculated as:

κ = (p_o − p_e) / (1 − p_e)

where

p_e = (1/N²) Σ_i n_i+ n_+i, i = 1, …, k

Here, p_o denotes the observed classification accuracy, p_e represents the expected accuracy under random classification, and n_i+ and n_+i denote the row and column totals of the confusion matrix for class i.
3 Results
3.1 Detection of grazing intensity by spectral features
The spectral mean and variance curves under different grazing intensities are presented in Figure 4. The grey values of the three visible-light features generally increased with grazing intensity, but decreased under severe grazing. In the multispectral bands, green and red reflectance increased with grazing intensity, whereas the red-edge and near-infrared bands showed decreasing trends. However, under severe grazing, both the red-edge and near-infrared bands exhibited an increasing trend.
Figure 4. Mean and variance curves under different grazing intensities. (a) Visible-light features, and (b) multispectral features. (NG, No Grazing; LG, Light Grazing; MG, Moderate Grazing; SG, Severe Grazing).
ANOVA results for visible and multispectral features revealed significant differences (p < 0.05) across grazing intensities, indicating that grazing intensity has a substantial impact on spectral responses. Tukey’s post-hoc test (Figure 5) revealed that the red (R) and blue (B) components significantly distinguished between light, moderate, and severe grazing intensities, with the most substantial differences observed under extreme grazing conditions. The green (G) component demonstrated a limited ability to separate light and moderate grazing intensities, but retained significant differentiation under severe grazing conditions. For the multispectral bands, the green and red bands showed substantial differences between light and severe grazing intensities. The red-edge and near-infrared bands were sensitive not only to the transition between light and severe grazing, but also to the transition between moderate and severe grazing, highlighting their strong ability to capture variations in vegetation status and grazing conditions. Overall, multispectral features offered more comprehensive discrimination of grazing levels than visible-light features. Specifically, the red-edge and near-infrared bands effectively captured vegetation changes under moderate-to-severe grazing, whereas visible-light features were more sensitive to extreme grazing.
Figure 5. Tukey’s multiple comparison results for original spectral features across grazing intensities. (a–c) Visible red, green, and blue components; (d–g) multispectral green, red, red-edge, and near-infrared bands. Significant differences among grazing groups are indicated (p < 0.05). (NG, No Grazing; LG, Light Grazing; MG, Moderate Grazing; SG, Severe Grazing).
In this study, the classification performance of grazing intensity was evaluated using KNN, SVM, and RF based on three feature sets: visible light (RGB), multispectral (Multi), and multi-source fusion (RGB+Multi). The results are presented in Figure 6. Among single-source features, RGB achieved the highest accuracy with KNN, yielding an OA of 78.77% and a Kappa of 70.86%. The Multi-features performed best with RF, achieving an OA of 75.94% and a Kappa of 66.74%. In contrast, the multi-source fusion features markedly improved classification, achieving OA values of 83.49%, 86.79%, and 81.13%, with corresponding Kappa values of 76.62%, 81.28%, and 73.25%. These results demonstrate that integrating visible and multispectral features leverages complementary spectral information to improve discrimination among grazing intensity levels. Overall, the combination of multi-source fusion features with the SVM classifier yielded the highest accuracy and the most consistent results.
Figure 6. Classification performance of three feature sets (RGB, Multi, and RGB+Multi) using KNN, SVM, and RF classifiers. (a) OA, and (b) Kappa coefficient.
Figure 7 presents pixel-level classification results under different combinations of original features and classifiers. As shown in Figure 7a, the experimental area was divided into four grazing intensity levels: no grazing (NG), light grazing (LG), moderate grazing (MG), and severe grazing (SG). Figures 7b–d illustrate the classification maps generated by different feature combinations and classifiers. The classification map based on visible (RGB) features using the KNN classifier (Figure 7b) roughly captures the grazing intensity distribution but shows some confusion between LG and MG regions. When using multispectral (Multi) features with the KNN classifier (Figure 7c), the classification accuracy improves slightly, particularly in distinguishing NG and SG areas. However, misclassifications remain at moderate levels. In contrast, the multi-source fusion (RGB+Multi) combined with the SVM classifier (Figure 7d) produces the most accurate and spatially consistent classification, with clear boundaries between different grazing intensities that closely align with the actual distribution. This demonstrates that multi-source feature fusion effectively enhances spatial discrimination of grazing intensity and mitigates spectral confusion caused by soil–vegetation mixing in sandy grasslands.
Figure 7. Pixel-level classification maps under different combinations of original features and classifiers. (a) Reference grazing intensity map; (b) RGB features with KNN, (c) multispectral (Multi) features with KNN, and (d) multi-source fusion (RGB+Multi) features with SVM. (NG, No Grazing; LG, Light Grazing; MG, Moderate Grazing; SG, Severe Grazing).
3.2 Detection of grazing intensity by spectral indices
The M-statistic was applied to assess the separability of the SIs, with results presented in Figure 8. For most indices, M values exceeded 1, indicating good separability among grazing intensity levels. Traditional visible indices (ExG, ExGR, RGBVI) and red–NIR indices (NDVI, GNDVI, SAVI, MSAVI, EVI2) generally exhibited high separability. Their maximum M values all exceeded 1, while the mean values were mainly within the range of 0.6 to 1.0. In contrast, NDRE, idx3, and several other indices showed poor separability, with maximum M values below 0.4. Notably, among the self-constructed indices, idx6, idx7, and idx14 performed best, with maximum M values exceeding 1.5. In particular, idx7 and idx14 reached extreme values above 2, demonstrating substantial advantages in capturing vegetation responses to grazing intensity and outperforming traditional indices.
Figure 9 presents the classification results of three classifiers (KNN, SVM, and RF) using different combinations of SIs. The classification accuracy of visible indices (RGB) was comparable to that of multispectral indices (Multi), with Multi showing slightly better overall performance. The results indicate that near-infrared and red-edge bands provide stronger discriminative capability for grazing level identification. Integrating RGB and Multi (RGB+Multi) markedly improved accuracy, with OA values of 84.91%, 88.68%, and 82.55% for KNN, SVM, and RF, respectively. Corresponding Kappa coefficients all exceeded 75%, indicating strong complementarity between visible and multispectral features. With the newly constructed indices (New-indices), classification performance further improved: all three classifiers achieved OA values above 87% and Kappa values above 82%. These results demonstrate that the newly proposed indices more effectively capture differences in grazing intensity and enhance class separability. Under the AllFusion condition (RGB+Multi+New-indices), results were comparable to those from RGB+Multi and New-indices. Overall, SVM showed the most consistent performance, with a maximum OA of 88.68% and a Kappa above 83%. Comparative analysis indicates that the newly constructed indices substantially improve discrimination of grazing intensity, outperforming single-band combinations, while multi-source feature fusion further enhances model robustness.
Figure 9. Classification performance for different combinations of spectral indices across three classifiers. (a) OA, and (b) Kappa coefficient.
Figure 10 presents pixel-level classification maps generated from different SI sets and classifiers. Figure 10a shows the reference grazing-intensity partition (NG, LG, MG, SG). Figure 10b-f display classification maps produced using different index sets and the best-performing classifier for each set. The map produced from RGB-derived indices with RF (Figure 10b) captures the broad spatial pattern but shows misclassification between adjacent LG and MG patches. Using multispectral indices with SVM (Figure 10c) improves discrimination of NG and SG zones and reduces local speckle. The fused RGB+Multi set under SVM (Figure 10d) increases spatial continuity and sharpens boundaries. Notably, the AllFusion map (Figure 10f), which uses the full index suite, attains the best overall spatial agreement with the reference—reflecting the benefit of combining complementary information across index families. The map based on the newly constructed indices (Figure 10e) also yields high spatial consistency and particularly good detection of moderate grazing, despite using a more compact feature set. Overall, AllFusion and the new-index set both perform strongly: AllFusion slightly surpasses other sets in overall spatial agreement, while the new indices offer nearly comparable accuracy with fewer features, demonstrating a favorable trade-off between completeness and parsimony.
Figure 10. Pixel-level classification maps derived from different spectral-index combinations and classifiers. (a) Reference grazing intensity map, (b) RGB-derived indices with RF, (c) Multi-derived indices with SVM, (d) RGB+Multi indices with SVM, (e) New indices set with SVM, and (f) AllFusion with SVM. (NG, No Grazing; LG, Light Grazing; MG, Moderate Grazing; SG, Severe Grazing).
3.3 Detection of grazing intensity by feature selection
This study applied multiple feature selection methods to the fused full-feature set, including random frog (Li et al., 2012), ReliefF (Robnik-Šikonja and Kononenko, 2003), least-squares mutual information (LSMI) (Suzuki et al., 2009), the successive projections algorithm (SPA) (Araújo et al., 2001; Yu et al., 2020), uninformative variable elimination (UVE) (Centner et al., 1996), competitive adaptive reweighted sampling (CARS) (Li et al., 2009), and AIFS. Each method used its own scoring or importance distribution to determine the selected features automatically. As shown in Table 2, the number and type of features selected by the different methods varied slightly but showed overall consistency. Most methods consistently selected traditional vegetation indices (e.g., ExG, GNDVI, NDRE, RGBVI, MSAVI) and several self-constructed indices (e.g., idx7, idx8, idx9, idx14), suggesting that these features have strong discriminatory power for grazing intensity across different algorithms. Among these, idx7, idx9, and idx14 appeared most frequently across the six methods, highlighting their stability and central importance. In contrast, the random frog and AIFS methods selected fewer features.
Table 3 shows that the feature selection methods performed differently across the three classifiers (KNN, SVM, RF). Overall, the AIFS method produced the best classification results: with SVM it achieved OA and Kappa values of 92.13% and 88.99%, respectively, and it also performed strongly under KNN and RF. This result demonstrates that the method achieves optimal discrimination while requiring fewer features. In contrast, the AllFusion approach, which utilizes all features, yielded the lowest accuracy; its OA and Kappa values across all three classifiers were markedly lower than those of the other methods, indicating that redundant features not only fail to improve performance but may also impair classification. Overall, appropriate feature selection substantially improves the accuracy of grazing intensity classification, with the AIFS method providing both efficiency and precision.
The classification model developed in this study was applied to perform pixel-level classification on images generated by fusing visible and multispectral data. To reduce noise, a 3 × 3 filter was used to smooth the classification map (Guan et al., 2025), as shown in Figure 11. The predicted grazing intensities (blue, green, yellow, and red areas) show a clear correspondence with the ground-truth grazing intensity areas (dashed regions representing NG, LG, MG, and SG). Regions with actual grazing intensities of NG, LG, MG, and SG were predominantly predicted as blue, green, yellow, and red, respectively. These results indicate that the model can accurately predict varying grazing intensities, demonstrating robust overall performance. However, minor deviations exist between some local predictions and actual conditions, likely due to factors such as terrain undulations and the grazing behavior of sheep.
Figure 11. Pixel-level classification map of grazing intensity. (NG, No Grazing; LG, Light Grazing; MG, Moderate Grazing; SG, Severe Grazing).
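The 3 × 3 smoothing step applied before producing Figure 11 can be illustrated with a majority (mode) filter, which replaces each pixel by the most frequent class label in its neighborhood. This is a minimal sketch under the assumption that a majority filter was used (the filter type is not specified in the text); the function name `majority_filter_3x3` is illustrative.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_filter_3x3(class_map: np.ndarray) -> np.ndarray:
    """Smooth a categorical classification map by replacing each pixel
    with the most frequent class label in its 3x3 neighborhood."""
    def mode_of_window(window):
        values, counts = np.unique(window.astype(int), return_counts=True)
        return values[np.argmax(counts)]
    # mode="nearest" replicates edge pixels so border windows stay 3x3
    return generic_filter(class_map, mode_of_window, size=3, mode="nearest")

# Example: an isolated speckle pixel (class 2) inside a class-1 patch
labels = np.ones((5, 5), dtype=int)
labels[2, 2] = 2
smoothed = majority_filter_3x3(labels)
```

In a grazing-intensity map the class labels would encode NG, LG, MG, and SG; the filter removes isolated misclassified pixels while preserving patch boundaries.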
4 Discussion
This study investigated the remote sensing-based monitoring of grazing intensity in sandy grasslands by integrating visible and multispectral data acquired from UAVs with spectral indices and an automatic incremental feature selection (AIFS) method. The results demonstrate that multi-source data fusion and feature optimization substantially improve the discrimination of grazing intensities, with the SVM classifier achieving the highest accuracy (OA=92.13%, Kappa=88.99%). These findings highlight that, in complex ecosystems such as sandy grasslands—characterized by sparse vegetation and strong soil background effects—the integration of UAV multi-source imagery with intelligent feature selection effectively addresses the limitations of traditional approaches and provides a robust methodological framework for grassland ecological monitoring.
4.1 Spectral response characteristics and ecological interpretation
The spectral mean curves revealed that visible bands generally increased with grazing intensity but declined under severe grazing conditions. This nonlinear pattern can be explained by the pronounced soil–vegetation hybrid effect typical of sandy grasslands. Light to moderate grazing increases soil exposure, enhancing reflectance in blue, green, and red bands due to the bright sandy background. Under heavy grazing, however, trampling induces surface crusting and litter accumulation, which reduce overall reflectance. In the multispectral region, the green and red bands similarly increased with grazing intensity, while the red-edge and near-infrared (NIR) bands decreased, reflecting reductions in chlorophyll content and canopy integrity. The slight rebound of red-edge and NIR reflectance under severe grazing likely arises from high-albedo soil effects and the persistence of grazing-tolerant species.
These results differ from the monotonic spectral decline patterns reported in humid or dense grasslands (Oliveira et al., 2019; Zhang et al., 2021a), illustrating that vegetation spectral responses in semi-arid grasslands are co-regulated by physiological degradation of plants and enhanced background reflectance from soil. This interactive mechanism underscores the need to account for both vegetation and soil components when interpreting spectral responses in dryland ecosystems (Shi et al., 2021).
4.2 Performance of spectral indices and feature selection
Statistical analyses (ANOVA and Tukey’s tests) confirmed that all spectral indices exhibited significant differences among grazing intensities. Traditional vegetation indices (NDVI, GNDVI) effectively captured the decline in vegetation cover and photosynthetic activity between light and heavy grazing but were less sensitive to moderate grazing levels. Indices that integrate red and NIR information (NDRE, MSAVI) showed superior discrimination between low–moderate and moderate–heavy grazing intensities, aligning with previous findings from semi-arid regions (Hernandez et al., 2024; Pádua et al., 2024; Pranga et al., 2024).
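The ANOVA step referenced above can be sketched as follows. The NDVI samples here are synthetic and purely illustrative (group means and sample sizes are assumptions, not the study's data); in practice, Tukey's HSD pairwise tests would follow the omnibus ANOVA to identify which grazing levels differ.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical NDVI samples per grazing level (synthetic illustration only;
# means, spreads, and n=30 per group are assumed, not the study's values)
rng = np.random.default_rng(42)
ndvi = {
    "NG": rng.normal(0.55, 0.05, 30),
    "LG": rng.normal(0.48, 0.05, 30),
    "MG": rng.normal(0.42, 0.05, 30),
    "SG": rng.normal(0.30, 0.05, 30),
}

# One-way ANOVA across the four grazing levels; a small p-value indicates
# at least one group mean differs (Tukey's HSD would then locate the pairs)
f_stat, p_value = f_oneway(*ndvi.values())
```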
The self-developed indices (idx7, idx9, idx14) demonstrated even greater adaptability and stability, reflecting the advantages of region-specific spectral design tailored to sandy grasslands. Feature selection consistently identified ExG, NDRE, MSAVI, and the newly constructed indices (idx7, idx9, idx14) as key predictors, confirming their robustness for grazing intensity monitoring (Xu et al., 2022).
Interestingly, NDRE and idx3 exhibited relatively low M-statistic values (<0.4), suggesting weak separability when evaluated independently. However, both were included in the final feature subset derived through AIFS. This apparent discrepancy highlights the difference between univariate separability and multivariate synergy. While the M-statistic evaluates features in isolation, the AIFS approach optimizes combinations based on their joint contribution to classification accuracy. Thus, even indices with limited standalone performance can provide complementary information that enhances the discriminative power and stability of the final model. The inclusion of correlation constraints further ensures non-redundancy among selected features, reinforcing the unique role of each feature in the optimized subset.
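The M-statistic used in the univariate separability screen above is commonly computed as the absolute difference of the two class means divided by the sum of their standard deviations; values above 1 indicate good separation, while low values (such as the <0.4 cases noted for NDRE and idx3) indicate heavy distributional overlap. A minimal numpy sketch, with synthetic index samples for illustration:

```python
import numpy as np

def m_statistic(a, b) -> float:
    """Separability M-statistic between two class samples:
    M = |mean_a - mean_b| / (std_a + std_b).
    M > 1 suggests good separation; low M indicates heavy overlap."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return abs(a.mean() - b.mean()) / (a.std(ddof=1) + b.std(ddof=1))

# Synthetic index samples for two grazing levels (illustration only)
rng = np.random.default_rng(1)
well_separated = m_statistic(rng.normal(0.2, 0.05, 200),
                             rng.normal(0.5, 0.05, 200))
overlapping = m_statistic(rng.normal(0.2, 0.05, 200),
                          rng.normal(0.22, 0.05, 200))
```

As the text notes, a low standalone M-statistic does not preclude a feature from contributing jointly within a multivariate subset.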
In addition, compared with commonly used feature selection algorithms such as ReliefF and CARS, AIFS also shows practical advantages in computational efficiency. ReliefF requires repeated distance-based sampling across the entire feature space, which becomes increasingly expensive as feature dimensionality grows. CARS relies on iterative Monte Carlo sampling and partial-least-squares regression, making its computational burden dependent on repeated model fitting. In contrast, AIFS performs a single-pass importance ranking using random forests followed by lightweight incremental evaluation with correlation filtering and cross-validation. Because redundant features are removed early during correlation screening, subsequent computations are substantially reduced. As a result, AIFS generally achieves competitive or higher classification accuracy while requiring fewer iterations and fewer full model evaluations than ReliefF and CARS, making it more suitable for UAV-based multi-source datasets where dozens of features must be processed efficiently.
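The AIFS workflow summarized above (single-pass random forest importance ranking, correlation screening, then incremental cross-validated evaluation) can be illustrated with the following simplified scikit-learn sketch. The function name `aifs_select`, the correlation threshold of 0.9, and all other parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def aifs_select(X, y, corr_thresh=0.9, cv=3, random_state=0):
    """Sketch of AIFS: (1) rank features once by random forest importance;
    (2) screen out candidates highly correlated with already-selected
    features; (3) keep a candidate only if it raises CV accuracy."""
    rf = RandomForestClassifier(n_estimators=200, random_state=random_state)
    rf.fit(X, y)
    ranking = np.argsort(rf.feature_importances_)[::-1]

    selected, best_score = [], 0.0
    for j in ranking:
        # Correlation screening: drop redundant candidates early
        if any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > corr_thresh
               for k in selected):
            continue
        trial = selected + [j]
        score = cross_val_score(
            RandomForestClassifier(n_estimators=100,
                                   random_state=random_state),
            X[:, trial], y, cv=cv).mean()
        if score > best_score:  # keep only features that improve accuracy
            selected, best_score = trial, score
    return selected, best_score

# Demonstration on synthetic data: two informative features, one
# near-duplicate of feature 0, and three pure-noise features
rng = np.random.default_rng(0)
n = 120
y = np.repeat([0, 1], n // 2)
f0 = y + 0.1 * rng.normal(size=n)
f1 = -y + 0.1 * rng.normal(size=n)
X = np.column_stack([f0, f1, f0 + 1e-3 * rng.normal(size=n),
                     rng.normal(size=(n, 3))])
selected, cv_accuracy = aifs_select(X, y)
```

On this toy example the correlation screen prevents the near-duplicate of feature 0 from entering the subset alongside it, which mirrors the non-redundancy behavior described in the text.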
4.3 Ecological and methodological implications
From an ecological perspective, the observed spectral responses reflect key plant physiological mechanisms under grazing disturbance (Xiang et al., 2025a, b). Moderate grazing stimulates compensatory growth and chlorophyll regeneration, leading to increased red-edge reflectance, whereas severe grazing reduces leaf area index (LAI) and internal scattering, lowering NIR reflectance. Concurrently, increased soil exposure elevates visible-band reflectance, generating a nonlinear response in the composite spectral signal. This “hybrid soil–vegetation effect” emphasizes that spectral variability in sandy grasslands arises from coupled changes in vegetation physiology, community composition, and soil optical properties.
Methodologically, this study demonstrates that multi-source UAV imagery combined with AIFS enhances classification performance while reducing redundancy, outperforming both full-feature and single-source approaches. The optimized feature set (idx9, idx8, idx7, NDVI, idx2, idx3, NDRE) achieved the highest classification accuracy and robustness, validating the efficiency of automated, data-driven feature selection in heterogeneous environments.
4.4 Limitations and future directions
Despite these advances, several limitations remain. The study was conducted on a controlled grazing trial with limited spatial coverage, which may not fully capture the spatial heterogeneity of sandy grasslands. In particular, the relatively small study area restricts the ability to test the generalizability of the proposed method across broader ecological gradients. The use of single-period UAV imagery restricted the analysis of seasonal dynamics. Additionally, some confusion persisted between light and moderate grazing levels, suggesting the need for features more sensitive to early degradation signals. Furthermore, although UAV-based data acquisition reduces many operational uncertainties, potential sources of error remain, including illumination variability, minor fluctuations in flight altitude, and atmospheric effects. In this study, flights were conducted under clear-sky conditions near solar noon to minimize illumination differences, and radiometric calibration panels along with the UAV’s RTK module were used to ensure consistent reflectance retrieval and high positioning accuracy. These measures help reduce uncertainty, but residual variability may still influence spectral responses.
Future studies should expand monitoring to diverse soil–vegetation types, extend the research area to larger and more heterogeneous sandy grassland regions, and incorporate multi-temporal UAV datasets for dynamic assessments. Integrating medium- and high-resolution satellite data (e.g., Sentinel-2, GF-6) with UAV observations would also allow multi-scale validation and facilitate application of the method to regional monitoring. Moreover, integrating structural information from LiDAR or radar could enhance discrimination under low vegetation cover. Further development of hybrid feature selection and deep learning approaches could also improve model robustness and generalizability.
5 Conclusions
This study utilized the fusion of visible and multispectral UAV data to systematically analyze spectral response features across different grazing intensities in sandy grasslands, which are characterized by low vegetation cover and substantial soil background interference. Integration of feature construction with an AIFS method improved the accuracy of grazing intensity classification. The main findings of this study are summarized as follows:
1. Visible and multispectral bands responded differently to grazing disturbance. Notably, the red-edge and near-infrared bands were particularly effective in distinguishing between moderate and severe grazing intensities. Nonlinear variations resulting from soil–vegetation interactions highlight the distinctive spectral characteristics of sandy grasslands.
2. Traditional vegetation indices (NDVI, GNDVI) were sensitive primarily to extreme grazing levels, whereas the proposed indices (idx7, idx9, idx14) outperformed them in distinguishing moderate grazing intensities, demonstrating strong regional adaptability.
3. The AIFS method efficiently reduced redundant information and enhanced model robustness, achieving the highest accuracy with the SVM classifier (OA=92.13%, Kappa=88.99%), surpassing the performance obtained using all features or single-source data.
The study demonstrates that multi-source UAV remote sensing, combined with intelligent feature selection, provides a promising approach for high-precision monitoring of grazing intensity in sandy grasslands, offering valuable insights for assessing grassland degradation and informing ecological management. However, the study is limited by the use of single-period data and small plot sizes. Future studies could incorporate multi-temporal and multi-scale observations, along with the integration of structural data, to further elucidate the dynamic processes and mechanisms underlying grazing disturbance. This study not only provides technical support for monitoring grazing intensity in sandy grasslands but also offers novel insights for ecological remote sensing research in low-vegetation-cover grasslands.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author/s.
Author contributions
QG: Conceptualization, Writing – original draft, Methodology, Investigation. MJ: Funding acquisition, Software, Visualization, Writing – review & editing. WD: Project administration, Writing – review & editing, Supervision. XC: Validation, Writing – original draft, Formal Analysis. BY: Data curation, Resources, Writing – original draft.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This study was supported by the Science and Technology Plan Project of Inner Mongolia Autonomous Region (2025SYFDZ0411), Inner Mongolia Natural Science Foundation Project (2025MS06053), Open Fund Project of the Innovation Centre for Intelligent Equipment for Pasture in Inner Mongolia Autonomous Region (MDK2025049), Basic Research Operating Expenses Project for Universities Directly Affiliated with the Inner Mongolia Autonomous Region (GXKY25Z016) and Inner Mongolia University for Nationalities Doctoral Research Start-up Fund (BSZ006).
Conflict of interest
The authors declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abdullah, M. Z., Guan, L. C., and Mohd Azemi, B. M. N. (2001). Stepwise discriminant analysis for colour grading of oil palm using machine vision system. Food Bioproducts Process. 79, 223–231. doi: 10.1205/096030801753252298
Ali, I., Cawkwell, F., Dwyer, E., Barrett, B., and Green, S. (2016). Satellite remote sensing of grasslands: from observation to management. J. Plant Ecol. 9, 649–671. doi: 10.1093/jpe/rtw005
Ali, A. and Kaul, H.-P. (2025). Monitoring yield and quality of forages and grassland in the view of precision agriculture applications—A review. Remote Sens. 17, 279. doi: 10.3390/rs17020279
Araújo, M. C. U., Saldanha, T. C. B., Galvão, R. K. H., Yoneyama, T., Chame, H. C., and Visani, V. (2001). The successive projections algorithm for variable selection in spectroscopic multicomponent analysis. Chemometrics Intelligent Lab. Syst. 57, 65–73. doi: 10.1016/S0169-7439(01)00119-8
Barber, N., Alvarado, E., Kane, V. R., Mell, W. E., and Moskal, L. M. (2021). Estimating fuel moisture in grasslands using UAV-mounted infrared and visible light sensors. Sensors 21, 6350. doi: 10.3390/s21196350
Bazzo, C. O., Kamali, B., Hütt, C., Bareth, G., and Gaiser, T. (2023). A review of estimation methods for aboveground biomass in grasslands using UAV. Remote Sens. 15, 639. doi: 10.3390/rs15030639
Belgiu, M. and Drăguţ, L. (2016). Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogrammetry Remote Sens. 114, 24–31. doi: 10.1016/j.isprsjprs.2016.01.011
Bendig, J., Yu, K., Aasen, H., Bolten, A., Bennertz, S., Broscheit, J., et al. (2015). Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Observation Geoinformation 39, 79–87. doi: 10.1016/j.jag.2015.02.012
Cao, X., Liu, Y., Yu, R., Han, D., and Su, B. (2021). A comparison of UAV RGB and multispectral imaging in phenotyping for stay green of wheat population. Remote Sens. 13, 5173. doi: 10.3390/rs13245173
Centner, V., Massart, D.-L., de Noord, O. E., de Jong, S., Vandeginste, B. M., and Sterna, C. (1996). Elimination of uninformative variables for multivariate calibration. Analytical Chem. 68, 3851–3858. doi: 10.1021/ac960321m
Fernández-Habas, J., Carriere Cañada, M., García Moreno, A. M., Leal-Murillo, J. R., González-Dugo, M. P., Abellanas Oar, B., et al. (2022). Estimating pasture quality of Mediterranean grasslands using hyperspectral narrow bands from field spectroscopy by Random Forest and PLS regressions. Comput. Electron. Agric. 192, 106614. doi: 10.1016/j.compag.2021.106614
Gao, S.-h., Yan, Y.-z., Yuan, Y., Zhang, N., Ma, L., and Zhang, Q. (2024). Comprehensive degradation index for monitoring desert grassland using UAV multispectral imagery. Ecol. Indic. 165, 112194. doi: 10.1016/j.ecolind.2024.112194
Guan, Q., Qiao, S., Feng, S., and Du, W. (2025). Investigation of peanut leaf spot detection using superpixel unmixing technology for hyperspectral UAV images. Agriculture 15, 597. doi: 10.3390/agriculture15060597
Guan, Q., Song, K., Feng, S., Yu, F., and Xu, T. (2022). Detection of peanut leaf spot disease based on leaf-, plant-, and field-scale hyperspectral reflectance. Remote Sens. 14, 4988. doi: 10.3390/rs14194988
Guan, Q., Zhao, D., Feng, S., Xu, T., Wang, H., and Song, K. (2023). Hyperspectral technique for detection of peanut leaf spot disease based on improved PCA loading. Agronomy 13, 1153. doi: 10.3390/agronomy13041153
Guo, S., Feng, Z., Wang, P., Chang, J., Han, H., Li, H., et al. (2024). Mapping and classification of the liaohe estuary wetland based on the combination of object-oriented and temporal features. IEEE Access 12, 60496–60512. doi: 10.1109/ACCESS.2024.3389935
Hernandez, A., Jensen, K., Larson, S., Larsen, R., Rigby, C., Johnson, B., et al. (2024). Using unmanned aerial vehicles and multispectral sensors to model forage yield for grasses of semiarid landscapes. Grasses 3, 84–109. doi: 10.3390/grasses3020007
Jiang, K., Zhang, Q., Wang, Y., Li, H., Yang, Y., and Reyimu, T. (2024). Effects of grazing on the grassland ecosystem multifunctionality of montane meadow on the northern slope of the Tianshan Mountains, China. Environ. Earth Sci. 83, 70. doi: 10.1007/s12665-023-11292-5
Ku, K.-B., Mansoor, S., Han, G. D., Chung, Y. S., and Tuan, T. T. (2023). Identification of new cold tolerant Zoysia grass species using high-resolution RGB and multi-spectral imaging. Sci. Rep. 13, 13209. doi: 10.1038/s41598-023-40128-2
Li, H., Liang, Y., Xu, Q., and Cao, D. (2009). Key wavelengths screening using competitive adaptive reweighted sampling method for multivariate calibration. Analytica Chimica Acta 648, 77–84. doi: 10.1016/j.aca.2009.06.046
Li, H.-D., Xu, Q.-S., and Liang, Y.-Z. (2012). Random frog: An efficient reversible jump Markov Chain Monte Carlo-like approach for variable selection with applications to gene selection and disease classification. Analytica Chimica Acta 740, 20–26. doi: 10.1016/j.aca.2012.06.031
Liedtke, J. D., Hunt, C. H., George-Jaeggli, B., Laws, K., Watson, J., Potgieter, A. B., et al. (2020). High-throughput phenotyping of dynamic canopy traits associated with stay-green in grain sorghum. Plant Phenomics 2020, 4635153. doi: 10.34133/2020/4635153
Lin, X., Zhao, H., Zhang, S., Li, X., Gao, W., Ren, Z., et al. (2022). Effects of animal grazing on vegetation biomass and soil moisture on a typical steppe in Inner Mongolia, China. Ecohydrology 15, e2350. doi: 10.1002/eco.2350
Liu, X., Wang, H., Cao, Y., Yang, Y., Sun, X., Sun, K., et al. (2023). Comprehensive growth index monitoring of desert steppe grassland vegetation based on UAV hyperspectral. Front. Plant Sci. 13. doi: 10.3389/fpls.2022.1050999
Liu, B., Ye, H., Liao, X., Zhang, X., Mao, G., and Pan, T. (2025). UAV Data for herbaceous community’ aboveground biomass upscaling: a new perspective on LiDAR and multispectral information fusion. Int. J. Digital Earth 18, 2543563. doi: 10.1080/17538947.2025.2543563
Lussem, U., Bolten, A., Gnyp, M. L., Jasper, J., and Bareth, G. (2018). Evaluation of rgb-based vegetation indices from uav imagery to estimate forage yield in grassland. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-3, 1215–1219. doi: 10.5194/isprs-archives-XLII-3-1215-2018
Lussem, U., Hollberg, J., Menne, J., Schellberg, J., and Bareth, G. (2017). Using calibrated rgb imagery from low-cost uavs for grassland monitoring: case study at the rengen grassland experiment (rge), Germany. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2/W6, 229–233. doi: 10.5194/isprs-archives-XLII-2-W6-229-2017
Lyu, X., Li, X., Dang, D., Dou, H., Wang, K., and Lou, A. (2022). Unmanned aerial vehicle (UAV) remote sensing in grassland ecosystem monitoring: A systematic review. Remote Sens. 14, 1096. doi: 10.3390/rs14051096
Lyu, X., Li, X., Dang, D., Wang, K., Zhang, C., Cao, W., et al. (2024). Systematic review of remote sensing technology for grassland biodiversity monitoring: Current status and challenges. Global Ecol. Conserv. 54, e03196. doi: 10.1016/j.gecco.2024.e03196
Oliveira, R. A., Näsi, R., Niemeläinen, O., Nyholm, L., Alhonoja, K., Kaivosoja, J., et al. (2019). Assessment of rgb and hyperspectral uav remote sensing for grass quantity and quality estimation. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2/W13, 489–494. doi: 10.5194/isprs-archives-XLII-2-W13-489-2019
Orlando, S., Minacapilli, M., Sarno, M., Carrubba, A., and Motisi, A. (2022). A low-cost multispectral imaging system for the characterisation of soil and small vegetation properties using visible and near-infrared reflectance. Comput. Electron. Agric. 202, 107359. doi: 10.1016/j.compag.2022.107359
Pádua, L., Castro, J. P., Castro, J., Sousa, J. J., and Castro, M. (2024). Assessing the impact of clearing and grazing on fuel management in a mediterranean oak forest through unmanned aerial vehicle multispectral data. Drones 8, 364. doi: 10.3390/drones8080364
Pamungkas, S. (2023). Analysis of vegetation index for ndvi, evi-2, and savi for mangrove forest density using google earth engine in Lembar Bay, Lombok Island. IOP Conf. Series: Earth Environ. Sci. 1127, 12034. doi: 10.1088/1755-1315/1127/1/012034
Pan, T., Ye, H., Zhang, X., Liao, X., Wang, D., Bayin, D., et al. (2024). Estimating aboveground biomass of grassland in central Asia mountainous areas using unmanned aerial vehicle vegetation indices and image textures – A case study of typical grassland in Tajikistan. Environ. Sustainability Indic. 22, 100345. doi: 10.1016/j.indic.2024.100345
Possoch, M., Bieker, S., Hoffmeister, D., Bolten, A., Schellberg, J., and Bareth, G. (2016). Multi-temporal crop surface models combined with the rgb vegetation index from uav-based images for forage monitoring in grassland. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLI-B1, 991–998. doi: 10.5194/isprs-archives-XLI-B1-991-2016
Pranga, J., Borra-Serrano, I., Quataert, P., De Swaef, T., Vanden Nest, T., Willekens, K., et al. (2024). Quantification of species composition in grass-clover swards using RGB and multispectral UAV imagery and machine learning. Front. Plant Sci. 15. doi: 10.3389/fpls.2024.1414181
Reinermann, S., Asam, S., and Kuenzer, C. (2020). Remote sensing of grassland production and management—A review. Remote Sens. 12, 1949. doi: 10.3390/rs12121949
Robnik-Šikonja, M. and Kononenko, I. (2003). Theoretical and empirical analysis of reliefF and RReliefF. Mach. Learn. 53, 23–69. doi: 10.1023/A:1025667309714
Rossi, M., Niedrist, G., Asam, S., Tonon, G., Tomelleri, E., and Zebisch, M. (2019). A comparison of the signal from diverse optical sensors for monitoring alpine grassland dynamics. Remote Sens. 11, 296. doi: 10.3390/rs11030296
Schucknecht, A., Seo, B., Krämer, A., Asam, S., Atzberger, C., and Kiese, R. (2022). Estimating dry biomass and plant nitrogen concentration in pre-Alpine grasslands with low-cost UAS-borne multispectral data – a comparison of sensors, algorithms, and predictor sets. Biogeosciences 19, 2699–2727. doi: 10.5194/bg-19-2699-2022
Shi, Y., Gao, J., Li, X., Li, J., dela Torre, D. M. G., and Brierley, G. J. (2021). Improved estimation of aboveground biomass of disturbed grassland through including bare ground and grazing intensity. Remote Sens. 13, 2105. doi: 10.3390/rs13112105
Smith, A. M. S., Drake, N. A., Wooster, M. J., Hudak, A. T., Holden, Z. A., and Gibbons, C. J. (2007). Production of Landsat ETM+ reference imagery of burned areas within Southern African savannahs: comparison of methods and application to MODIS. Int. J. Remote Sens. 28, 2753–2775. doi: 10.1080/01431160600954704
Stoline, M. R. (1981). The status of multiple comparisons: simultaneous estimation of all pairwise comparisons in one-way ANOVA designs. Am. Statistician 35, 134–141. doi: 10.1080/00031305.1981.10479331
Suzuki, T., Sugiyama, M., Kanamori, T., and Sese, J. (2009). Mutual information estimation reveals global associations between stimuli and biological processes. BMC Bioinf. 10, S52. doi: 10.1186/1471-2105-10-S1-S52
Tilly, N., Aasen, H., and Bareth, G. (2015). Fusion of plant height and vegetation indices for the estimation of barley biomass. Remote Sens. 7, 11449–11480. doi: 10.3390/rs70911449
Varela, S., Pederson, T., Bernacchi, C. J., and Leakey, A. D. B. (2021). Understanding growth dynamics and yield prediction of sorghum using high temporal resolution UAV imagery time series and machine learning. Remote Sens. 13, 1763. doi: 10.3390/rs13091763
Wang, Z., Ma, Y., Zhang, Y., and Shang, J. (2022). Review of remote sensing applications in grassland monitoring. Remote Sens. 14, 2903. doi: 10.3390/rs14122903
Wang, S., Tuya, H., Zhang, S., Zhao, X., Liu, Z., Li, R., et al. (2023). Random forest method for analysis of remote sensing inversion of aboveground biomass and grazing intensity of grasslands in Inner Mongolia, China. Int. J. Remote Sens. 44, 2867–2884. doi: 10.1080/01431161.2023.2210724
Wengert, M., Wijesingha, J., Schulze-Brüninghoff, D., Wachendorf, M., and Astor, T. (2022). Multisite and multitemporal grassland yield estimation using UAV-borne hyperspectral data. Remote Sens. 14, 2068. doi: 10.3390/rs14092068
Wu, W., Walczak, B., Massart, D. L., Heuerding, S., Erni, F., Last, I. R., et al. (1996). Artificial neural networks in classification of NIR spectral data: Design of the training set. Chemometrics Intelligent Lab. Syst. 33, 35–46. doi: 10.1016/0169-7439(95)00077-1
Xiang, S., Wang, S., Guo, Z., Wang, N., Jin, Z., Yu, F., et al. (2025a). Inversion of nitrogen concentration in crop leaves based on improved radiative transfer model. Comput. Electron. Agric. 239, 111017. doi: 10.1016/j.compag.2025.111017
Xiang, S., Wang, S., Jin, Z., Xiao, Y., Liu, M., Yang, H., et al. (2025b). RSPECT: A PROSPECT-based model incorporating the real structure of rice leaves. Remote Sens. Environ. 330, 114962. doi: 10.1016/j.rse.2025.114962
Xu, X., Liu, L., Han, P., Gong, X., and Zhang, Q. (2022). Accuracy of vegetation indices in assessing different grades of grassland desertification from UAV. Int. J. Environ. Res. Public Health 19, 16793. doi: 10.3390/ijerph192416793
Yu, F.-h., Bai, J.-c., Jin, Z.-y., Guo, Z.-h., Yang, J.-x., and Chen, C.-l. (2023). Combining the critical nitrogen concentration and machine learning algorithms to estimate nitrogen deficiency in rice from UAV hyperspectral data. J. Integr. Agric. 22, 1216–1229. doi: 10.1016/j.jia.2022.12.007
Yu, F., Feng, S., Du, W., Wang, D., Guo, Z., Xing, S., et al. (2020). A study of nitrogen deficiency inversion in rice leaves based on the hyperspectral reflectance differential. Front. Plant Sci. 11. doi: 10.3389/fpls.2020.573272
Yu, F., Xu, C., Xiang, S., Bai, J., Jin, Z., Zhang, H., et al. (2025). Hyperspectral leaf reflectance simulation considering internal structure. Sci. Rep. 15, 13639. doi: 10.1038/s41598-025-98299-z
Yu, F., Zhang, H., Bai, J., Xiang, S., and Xu, T. (2024). Method for the hyperspectral inversion of the phosphorus content of rice leaves in cold northern China. Int. J. Agric. Biol. Eng. 17, 256–263. doi: 10.25165/j.ijabe.20241706.8464
Yue, J., Yang, G., Tian, Q., Feng, H., Xu, K., and Zhou, C. (2019). Estimate of winter-wheat above-ground biomass based on UAV ultrahigh-ground-resolution image textures and vegetation indices. ISPRS J. Photogrammetry Remote Sens. 150, 226–244. doi: 10.1016/j.isprsjprs.2019.02.022
Zhang, T., Bi, Y., and Xuan, C. (2024a). Convolutional transformer attention network with few-shot learning for grassland degradation monitoring using UAV hyperspectral imagery. Int. J. Remote Sens. 45, 2109–2135. doi: 10.1080/01431161.2024.2326042
Zhang, Z., Gong, J., Song, L., Zhang, S., Zhang, W., Dong, J., et al. (2024b). Adaptations of soil microbes to stoichiometric imbalances in regulating their carbon use efficiency under a range of different grazing intensities. Appl. Soil Ecol. 193, 105141. doi: 10.1016/j.apsoil.2023.105141
Zhang, A., Hu, S., Zhang, X., Zhang, T., Li, M., Tao, H., et al. (2021a). A handheld grassland vegetation monitoring system based on multispectral imaging. Agriculture 11. doi: 10.3390/agriculture11121262
Zhang, J., Qiu, X., Wu, Y., Zhu, Y., Cao, Q., Liu, X., et al. (2021b). Combining texture, color, and vegetation indices from fixed-wing UAS imagery to estimate wheat growth parameters using multivariate regression methods. Comput. Electron. Agric. 185, 106138. doi: 10.1016/j.compag.2021.106138
Zhang, H., Tang, Z., Wang, B., Meng, B., Qin, Y., Sun, Y., et al. (2022). A non-destructive method for rapid acquisition of grassland aboveground biomass for satellite ground verification using UAV RGB images. Global Ecol. Conserv. 33, e01999. doi: 10.1016/j.gecco.2022.e01999
Zhao, Y., Liu, Z., and Wu, J. (2020). Grassland ecosystem services: a systematic review of research advances and future directions. Landscape Ecol. 35, 793–814. doi: 10.1007/s10980-020-00980-3
Zhou, J., Yungbluth, D., Vong, C. N., Scaboo, A., and Zhou, J. (2019). Estimation of the maturity date of soybean breeding lines using UAV-based multispectral imagery. Remote Sens. 11. doi: 10.3390/rs11182075
Zhu, X., Bi, Y., Du, J., Gao, X., Zhang, T., Pi, W., et al. (2023). Research on deep learning method recognition and a classification model of grassland grass species based on unmanned aerial vehicle hyperspectral remote sensing. Grassland Sci. 69, 3–11. doi: 10.1111/grs.12379
Zwick, M., Cardoso, J. A., Gutiérrez-Zapata, D. M., Cerón-Muñoz, M., Gutiérrez, J. F., Raab, C., et al. (2024). Pixels to pasture: Using machine learning and multispectral remote sensing to predict biomass and nutrient quality in tropical grasslands. Remote Sens. Applications: Soc. Environ. 36, 101282. doi: 10.1016/j.rsase.2024.101282
Keywords: sandy grasslands, multi-source remote sensing, unmanned aerial vehicle, grazing intensity, spectral indices, feature selection
Citation: Guan Q, Jiang M, Du W, Chen X and Yan B (2025) Integrating UAV visible and multispectral imagery to assess grazing-induced vegetation responses in sandy grasslands. Front. Plant Sci. 16:1730583. doi: 10.3389/fpls.2025.1730583
Received: 23 October 2025; Revised: 21 November 2025; Accepted: 25 November 2025;
Published: 11 December 2025.
Edited by:
Yu Weiguo, Seoul National University, Republic of Korea
Reviewed by:
Dayang Liu, Northeast Forestry University, China
Fa Zhao, Anhui Polytechnic University, China
Copyright © 2025 Guan, Jiang, Du, Chen and Yan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Wen Du, duwen@syau.edu.cn