<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Remote Sensing | Agro-Environmental Remote Sensing section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/remote-sensing/sections/agro-environmental-remote-sensing</link>
        <description>RSS Feed for Agro-Environmental Remote Sensing section in the Frontiers in Remote Sensing journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Thu, 14 May 2026 09:59:19 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2026.1779561</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2026.1779561</link>
        <title><![CDATA[Development of an in-season nitrogen application dose estimation algorithm for cotton using multispectral imaging-based nitrogen adequacy index]]></title>
        <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>R. Raja</author><author>D. Kanjana</author><author>P. Nalayini</author><author>K. Rameash</author><author>G. Tamil Amutham</author><author>T. Arumuganathan</author><author>D. Blaise</author><author>Y. G. Prasad</author>
        <description><![CDATA[Managing within-field variability in cotton fields for precision nitrogen (N) management is difficult. The development of multispectral sensors and image data analytics offers a way to address this issue. During the Kharif (monsoon) seasons of 2021 and 2022, field experiments were conducted at the ICAR-Central Institute for Cotton Research, Regional Station, Coimbatore, Tamil Nadu, involving seven N levels (25%, 50%, 75%, 100%, 125%, 150%, and 200% of the prescribed N dose of 80 kg ha⁻¹), along with a control (N0). Unmanned aerial systems-based multispectral crop imaging data were collected to develop an algorithm for calculating the N dose for in-season variable-rate application. The multispectral images were processed for different crop growth stages, and treatment-wise mean normalized difference vegetation index (NDVI) and normalized difference red edge (NDRE) values were determined. The relationship between the mean NDRE index and estimated leaf N content exhibited statistically significant regressions at 70 and 95 days after emergence (DAE) during Kharif 2021 and at 75 and 90 DAE during Kharif 2022. Spatial nitrogen adequacy index (NAI) maps were created using the 95th percentile NDRE values (0.37 for Kharif 2021 and 0.27 for Kharif 2022). The N rate that maximized cotton yield was used to parameterize the NAI-based N-sufficiency response curve. The quadratic model N rate (kg ha⁻¹) = −191.25(NAI)² − 22.32(NAI) + 138.5 produced the best fit, with a coefficient of determination (R²) value of 0.84. The developed algorithm can be used to prepare N zone maps for variable-rate N application and has the potential to optimize cotton’s in-season N requirement.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2026.1711426</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2026.1711426</link>
        <title><![CDATA[Spectral assessment of nutrient limitation in the savanna landscape: selection of spectral indices towards Sentinel-2 upscaling]]></title>
        <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Nasiphi Ngcoliso</author><author>Abel Ramoelo</author><author>Philemon Tsele</author><author>Mcebisi Qabaqaba</author><author>Siyamthanda Gxokwe</author>
        <description><![CDATA[Nutrient limitations can significantly impact the ecosystem services provided by the savanna biome, potentially leading to degradation and reduced grazing capacity if not detected in time. A key indicator of growth-limiting nutrients is the Nitrogen to Phosphorus (N:P) ratio. However, grass foliar phosphorus content has rarely been studied in African savannas, especially using remote sensing approaches. As a result, there is limited information on the spatial distribution of nutrient limitations in these ecosystems. This study aimed to develop a Sentinel-2-based machine learning regression model to predict and map the distribution of the N:P ratio in the northern region of Kruger National Park (KNP), South Africa, which is dominated by the savanna rangeland biome. Fieldwork was conducted between 15 March and 30 April 2008 to collect grass samples and spectral data using an Analytical Spectral Device (ASD). The hyperspectral field data were then resampled to match the multispectral configuration of Sentinel-2 imagery. A Random Forest Regression (RFR) technique was applied to the simulated Sentinel-2 datasets to develop predictive models of the N:P ratio. Model accuracy was evaluated using the Root Mean Square Error (RMSE), Relative Root Mean Square Error (RRMSE), Percent Bias (PBIAS), and the coefficient of determination (R²). The results showed that, among the vegetation indices (VIs), the Normalized Difference Red Edge (NDRE) derived from Sentinel-2 bands B8 and B5 was optimal for estimating the N:P ratio. This index explained over 80% of the N:P variability, with the lowest PBIAS of 0.02%. The best-performing model was used to map nutrient limitations across the study area using Sentinel-2 imagery. The spatial analysis indicated consistent nitrogen limitation and co-limitation across the investigated regions, with no evidence of phosphorus limitation.
The high-accuracy models demonstrate the effectiveness of Sentinel-2 imagery for estimating nutrient limitations in heterogeneous savanna landscapes. This study offers a cost-effective, scalable tool for decision-makers involved in the management, sustainability, and restoration of the savanna biome. Future research should consider incorporating textural and environmental variables to enhance model performance and understanding of nutrient dynamics.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2026.1730222</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2026.1730222</link>
        <title><![CDATA[A multi-feature fusion based remote sensing inversion method for farmland shelterbelts]]></title>
        <pubDate>Tue, 17 Feb 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Qi Zhang</author><author>Yuncheng Zhou</author><author>Hongge Zhao</author><author>Wenhao Wu</author><author>Yuekun Huang</author>
        <description><![CDATA[Precise segmentation of farmland shelterbelts in high-resolution remote sensing imagery represents a crucial yet challenging task for establishing a quantifiable farmland quality evaluation system. The core difficulties arise from two principal issues: (1) effectively distinguishing cultivated land from shelterbelts with similar textural characteristics while suppressing interference from complex backgrounds such as roads and ditches; and (2) accurately segmenting narrow, elongated, and discontinuously distributed single-row shelterbelts with blurred boundaries. Conventional semantic segmentation methods, primarily designed for large-scale objects in natural scenes, generally underperform when confronted with the distinctive characteristics of remote sensing targets. To overcome these challenges, we propose a novel remote sensing inversion framework based on multi-feature fusion. For the first challenge, we designed a Multi-Feature Fusion Block (MFFB) that utilizes a Spatial Gated Fusion Mechanism (SGFM) to adaptively integrate global contextual features captured by Mamba-like linear attention, local details extracted through convolutional operators, and frequency-domain information obtained via Fast Fourier Transform (FFT), thereby significantly enhancing the model’s capacity to represent and discriminate complex features. To address the second challenge, we introduced a super-resolution preprocessing strategy along with a Multi-Scale Contextual feature Extraction (MSCE) module within an encoder-decoder architecture. The former effectively increases the pixel width of narrow shelterbelts through enhanced image detail reconstruction, while the latter ensures segmentation continuity for elongated features by integrating multi-scale contextual information. 
Experimental results on our self-constructed farmland shelterbelt dataset demonstrate that our method achieves segmentation accuracies of 96.42% for cultivated land and 82.83% for shelterbelts, outperforming both mainstream general-purpose semantic segmentation models and specialized remote sensing methods, thus validating the effectiveness of the proposed framework for precise farmland shelterbelt extraction.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1669081</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1669081</link>
        <title><![CDATA[A low-cost MLS prototype for voxel-based above-ground biomass estimation in short-rotation plantations]]></title>
        <pubDate>Wed, 10 Dec 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Michal Skladan</author><author>Arunima Singh</author><author>Juliana Chudá</author><author>Martin Lieskovský</author><author>Matej Masný</author><author>Jozef Vyboštok</author>
        <description><![CDATA[Short-rotation plantations of fast-growing trees (FGT) offer a sustainable biomass source to mitigate climate change and boost rural energy self-sufficiency. Accurate estimation of woody above-ground biomass (AGB) is critical for efficient management and utilization of these plantations. This study evaluates modern mobile laser scanning (MLS) techniques for dry-weight AGB estimation, comparing a commercial MLS system with a low-cost prototype built on the Livox Mid-360 sensor. Research was carried out in a dense, second-rotation poplar clone plantation. Thirty-one research plots were scanned using both MLS setups, then harvested and oven-dried to obtain reference dry weights. Point clouds were processed via a voxel-based approach at four resolutions (5, 10, 15, and 20 cm) to develop regression models correlating total voxel volume with dry biomass. The low-cost prototype delivered its best performance at 5 cm voxel size (R² = 0.84; rRMSE = 12.2%), markedly outperforming the commercial system at the same resolution (R² = 0.68; rRMSE = 17.5%). The commercial MLS achieved its optimum at 20 cm voxels (R² = 0.82; rRMSE = 12.9%). Predictive models were validated using 16 plots for training and 15 for testing. The prototype yielded the highest precision for dry-weight prediction (R² = 0.89; rRMSE = 12.9%) at 5 cm resolution, while the commercial MLS excelled in fresh-weight estimation at 15 cm resolution (R² = 0.92; rRMSE = 12.0%). These results demonstrate that affordable MLS solutions can provide biomass estimates comparable to those of higher-cost systems for dry AGB assessment in high-density poplar stands. Implementing low-cost laser scanning improves monitoring frequency, reduces operational expenses, and enables large-scale application in short-rotation forestry. This approach supports evidence-based decision-making for sustainable bioenergy production.
Future work will explore integrating multispectral data and automated processing pipelines to further enhance biomass estimation accuracy and scalability across diverse forest conditions.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1620109</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1620109</link>
        <title><![CDATA[Synergizing BRDF correction and deep learning for enhanced crop classification in GF-1 WFV imagery]]></title>
        <pubDate>Thu, 10 Jul 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Yuanwei Chen</author><author>Yang Li</author><author>Runze Li</author><author>Chongzheng Guo</author><author>Jilin Li</author>
        <description><![CDATA[Accurate crop classification is essential for agricultural management, resource allocation, and food security monitoring. GF-1 Wide Field View (WFV) imagery suffers from Bidirectional Reflectance Distribution Function (BRDF) effects due to large viewing angles (0°–48°), reducing crop classification accuracy. This study innovatively integrates BRDF correction with deep learning to address this problem. First, a BRDF correction method based on the normalized difference vegetation index (NDVI) and the anisotropic flat index (AFX) is developed to normalize radiometric discrepancies. Second, utilizing four spectral bands from WFV images along with three effective vegetation indices as feature variables, a multi-feature fusion deep learning classification system is constructed. Three typical deep learning architectures—Feature Pyramid Network (FPN), Fully Convolutional Network (FCN), and UNet—are employed to perform classification experiments. Results demonstrate that BRDF correction consistently improves accuracy across models, with UNet achieving the best performance: 95.02% overall accuracy (+0.65%), a Kappa coefficient of 0.9316 (+0.0088), and 91.29% mean intersection over union (mIoU) (+1.06%). The improvements in mIoU (+2.31%) for FPN and in overall accuracy (+2.11%) for FCN further prove the necessity of BRDF correction. By integrating physical BRDF correction with deep learning techniques, this study establishes a new benchmark for precision crop mapping in large-viewing satellite imagery, thereby advancing scalable solutions for agricultural monitoring.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1581355</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1581355</link>
        <title><![CDATA[Enhancing vegetation monitoring: a proposal for a Sentinel-2 based vegetation health index]]></title>
        <pubDate>Mon, 30 Jun 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Sandeep Kumar</author><author>Swarnendu Sekhar Ghosh</author><author>Dipankar Mandal</author><author>Avik Bhattacharya</author><author>Alok Porwal</author><author>L. Karthikeyan</author>
        <description><![CDATA[Vegetation serves as a vital carbon sink, crucial for regulating CO₂ and O₂ levels in the atmosphere. However, the declining health of vegetation can contribute to a rise in greenhouse gas emissions. Utilizing remote sensing satellite imagery, we can effectively monitor global changes in vegetation health in near-real time. Various vegetation indices have been developed to monitor specific biochemical properties. Yet, many of these indices fall short in detecting health deterioration caused by multiple stressors such as excessive heat, salinity, and water scarcity. Indices that are primarily sensitive to single-leaf parameters may not fully capture the complex stress responses in vegetation. To address this limitation, we introduce a novel vegetation health indicator: the Sentinel-2-based Vegetation Health Index (SVHI). This index is designed to detect stress-induced changes in chlorophyll, water, and protein content. We validated SVHI through a global sensitivity analysis (GSA) using radiative transfer models, which indicated strong sensitivity to variations in chlorophyll and water content. Following the GSA, a lab-based spectroscopy experiment was conducted to detect the effect of water stress and chlorophyll stress on the vegetation indices. In the water-stress experiment, SVHI demonstrated 5 and 1.1 times greater sensitivity than NDVI and the normalized difference moisture index (NDMI), respectively, in the early stages of water loss (150%–85% leaf water content), as confirmed by Tukey’s HSD test (p < 0.05). NDVI failed to show a statistically significant change during this period (p = 0.63). The chlorophyll-stress experiment revealed that NDMI could not detect chlorophyll degradation, while SVHI retained sensitivity throughout the chlorophyll decline.
Further, we performed a corn crop phenology analysis using Sentinel-2 data to confirm the effectiveness of SVHI. The analysis revealed that the proposed index successfully distinguishes characteristic changes in vegetation over time. In addition, unlike NDMI, SVHI differentiates non-vegetated areas, such as water bodies, from vegetated areas. Finally, a temporal analysis of the vegetation indices reveals that SVHI is highly correlated with both NDVI (R² = 0.958) and NDMI (R² = 0.993), indicating its capability to capture variations in both greenness and moisture content.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1571149</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1571149</link>
        <title><![CDATA[A systematic review of remote sensing technologies and techniques for agricultural insect pest monitoring: lessons for Locustana pardalina (Brown Locust) control in South Africa]]></title>
        <pubDate>Fri, 30 May 2025 00:00:00 +0000</pubDate>
        <category>Systematic Review</category>
        <author>Kuselwa Mpisane</author><author>Mahlatse Kganyago</author><author>Cilence Munghemezulu</author><author>Roger Price</author><author>Lwandile Nduku</author>
        <description><![CDATA[Insect pests are responsible for 20%–40% of annual agricultural production losses globally, leading to an over-reliance on pesticides in farming practices. This overuse of pesticides adversely affects the environment, human health, and natural resources. Integrated Pest Management has been utilized to enhance insect pest control, decrease the excessive use of pesticides, and improve the output and quality of crops. The integration of remote sensing in pest management presents an alternative and cost-effective tool for insect pest monitoring and targeted management. This study provides a systematic review of remote sensing technologies for insect pest monitoring. The study analyzed 103 studies published between 2014 and 2024 indexed in the Scopus and Web of Science databases. The results showed that insect pest monitoring studies using remote sensing increased annually in the past decade. Furthermore, the findings revealed that the Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat 7 Enhanced Thematic Mapper Plus (ETM+), and Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) are the sensors mainly used to detect and monitor the impact of insect pests on vegetation. Most studies reported that insect pests have been detected in forests and croplands, with newer sensors such as the Sentinel-2 MultiSpectral Instrument and PlanetScope holding potential for systematic assessments in the future. The United States of America and China lead in insect pest monitoring research contributions. However, the analysis highlighted the lack of research contributions from South America and African countries, which underscores the need for increased research efforts on insect pest monitoring, particularly as insect pests increasingly impact food security and biodiversity in sub-Saharan Africa, where food insecurity is rife and biodiversity is threatened by a myriad of factors.
Overall, recent advances in remote sensing emphasize the need for more research incorporating new sensors and predictive modelling in the monitoring and assessment of insect pests such as the notorious Brown Locust in South Africa.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1572114</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1572114</link>
        <title><![CDATA[Assessing SWOT interferometric SAR altimetry for inland water monitoring: insights from Lake Léman]]></title>
        <pubDate>Wed, 09 Apr 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Henri Bazzi</author><author>Nicolas Baghdadi</author><author>Yen-Nhi Ngo</author><author>Cassandra Normandin</author><author>Frédéric Frappart</author><author>Cecile Cazals</author>
        <description><![CDATA[Monitoring water levels is crucial for managing water resources and addressing climate change challenges. The new Surface Water and Ocean Topography (SWOT) mission provides unprecedented spatial and temporal resolution estimates of water surface elevations (WSEs) globally. This study evaluates the accuracy of SWOT WSE estimates over Lake Léman, Switzerland. We evaluated the SWOT L2-HR-Raster product from the calibration and nominal phases using in situ measurements of water levels and compared its performance with other missions, including Sentinel-3A (S3A), Sentinel-3B (S3B), Sentinel-6 (S6), and Global Ecosystem Dynamics Investigation (GEDI) altimetry. From over 141 acquisitions, SWOT achieved a root mean squared error (RMSE) ranging from 13 cm to 21 cm compared to in situ water levels, depending on the measurement quality reported in the product. Data flagged as good quality had an RMSE of 19 cm and a correlation coefficient (R) of 0.8, although these represented only 42% of the total measurements. When considering WSE estimates of all quality levels and applying a median outlier filter, the RMSE reached 21 cm, with a correlation coefficient of 0.79, while retaining approximately 83% of the dataset. A consistent bias of −10 cm was observed across the time series. An analysis of SWOT accuracy relative to instrumental parameters revealed that nadir and near-nadir acquisitions (viewing angle near 0°) exhibited very high uncertainty, with mean absolute differences from in situ water levels potentially exceeding 5 m. To explore the sources of errors in SWOT WSE, a random forest analysis showed that atmospheric perturbations had the most significant impact on the SWOT WSE estimation accuracy. These perturbations were linked to dry tropospheric delays affecting interferometric height measurements and atmospheric effects on the Ka-band sigma0 values.
Compared to other missions, SWOT demonstrated slightly better accuracy than S3A, S3B, and S6, with an RMSE of 11 cm on a daily scale, compared to 13 cm, 18 cm, and 20 cm for these three Sentinel missions, respectively. All radar-based missions (S3A, S3B, S6, and SWOT) exhibited correlation coefficients exceeding 0.95 with in situ water levels. In contrast, GEDI LiDAR data showed the highest RMSE (46 cm), a bias of 27 cm, and a correlation coefficient of 0.45.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1520963</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1520963</link>
        <title><![CDATA[Advancing river flow monitoring with small uncrewed aircraft and simulation-driven development]]></title>
        <pubDate>Tue, 18 Mar 2025 00:00:00 +0000</pubDate>
        <category>Methods</category>
        <author>Michael Dille</author><author>Massimo Vespignani</author><author>Jonathan Bruce</author><author>Uland Wong</author>
        <description><![CDATA[Current streamgaging processes for river flow rate estimation are typically slow and often hazardous, leading to inadequate coverage across national waterways. This paper presents a semi-autonomous aerial monitoring system designed for rapid river flow gaging, building upon a recently developed sensor package mounted beneath a small uncrewed aerial vehicle. This package includes, among other instruments, a mid-wave infrared camera that detects minute thermal variations in the water surface, from which a particle image velocimetry algorithm extracts flow estimates. The design and testing of this sensor package and velocimetry algorithm for field evaluation are discussed, and a simulation environment facilitating the development of algorithms for automatic a priori and live-adaptive vehicle trajectory planning is presented. The simulation environment captures a physically based approximation of vehicle flight characteristics, contains digital terrain models of field test sites, and incorporates water surface flow maps generated from numerical flow simulation data and real-world measurements. Field and simulation results are presented that validate the design of the sensor package and the use of simulation as a digital twin for aerial streamgaging development. This framework and the lessons learned to date lay the foundation for accelerated improvements in waterway measurement for both routine and disaster response purposes requiring rapid deployment in novel locations.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2024.1360572</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2024.1360572</link>
        <title><![CDATA[Forestry climate adaptation with HarvesterSeasons service—a gradient boosting model to forecast soil water index SWI from a comprehensive set of predictors in Destination Earth]]></title>
        <pubDate>Fri, 20 Dec 2024 00:00:00 +0000</pubDate>
        <category>Technology and Code</category>
        <author>Mikko Strahlendorff</author><author>Anni Kröger</author><author>Golda Prakasam</author><author>Miriam Kosmale</author><author>Mikko Moisander</author><author>Heikki Ovaskainen</author><author>Asko Poikela</author>
        <description><![CDATA[Soil wetness forecasts on a local level are needed to ensure sustainable forestry operations during summer when the soil is neither frozen nor covered with snow. Training gradient boosting models has been successful in predicting satellite observation-based products into the future using Numerical Weather Prediction (NWP) and Earth Observation (EO) climate data as inputs. The Copernicus Global Land Monitoring Service’s Soil Water Index (SWI) satellite-based observations from 2015 to 2023 at 10,000 locations in Europe were used as the predictand (target parameter) to train an artificial intelligence (AI) model to predict soil wetness with XGBoost (eXtreme Gradient Boosting) and LightGBM (Light Gradient Boosting Machine) implementations of gradient boosting algorithms. The locations were selected as a representative set of points from the Land Use/Cover Area Frame Survey (LUCAS) sites, which helped ensure that the locations used in fitting represent diverse landscapes across Europe. Over 70 predictors were tested, including the climatology of EO-based predictors such as SWI and Leaf Area Index (LAI); over 40, mainly from ERA5-Land reanalysis, were used in the final model. The final model achieved a mean absolute error of 5.5% and a root mean square error of 7% for variable values ranging from 0% to 100%, an accuracy sufficient for the forestry use case. To further validate the model, an SWI prediction was made using the 215-day seasonal forecast ensemble from April 2021, consisting of 51 members. With this, the quality could also be demonstrated in the way our forestry climate service (HarvesterSeasons.com) would use the forecasts. As soil wetness does not change as rapidly as many weather parameters, the forecast skill appears to last longer for it than for the weather variables.
The technology demonstration and machine learning work were conducted as a part of the HarvesterDestinE project, supported by European Union Destination Earth funding managed by the European Center for Medium-Range Weather Forecasts (ECMWF) contract DE_370d_FMI. The authors wish to acknowledge CSC – IT Center for Science, Finland, for computational resources. The code for the machine learning work and the predictions are available as open source at https://github.com/fmidev/ml-harvesterseasons (see README-SWI2). The training data and ML models are at https://destine.data.lit.fmi.fi/soilwater/. All data used for predictions are accessible from the SmartMet server at https://desm.harvesterseasons.com/grid-gui, and the workflow is available in the script https://github.com/fmidev/harvesterseasons-smartmet/blob/master/bin/get-seasonal.sh. Everything is made available to ensure reproducibility. Users will need to register and use their own https://cds.climate.copernicus.eu credentials to do so.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2024.1337953</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2024.1337953</link>
        <title><![CDATA[Fine-scale surficial soil moisture mapping using UAS-based L-band remote sensing in a mixed oak-grassland landscape]]></title>
        <pubDate>Wed, 20 Nov 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Michelle Stern</author><author>Ryan Ferrell</author><author>Lorraine Flint</author><author>Melina Kozanitas</author><author>David Ackerly</author><author>Jack Elston</author><author>Maciej Stachura</author><author>Eryan Dai</author><author>James Thorne</author>
        <description><![CDATA[Soil moisture maps provide quantitative information that, along with climate and energy balance, is critical to integrate with hydrologic processes for characterizing landscape conditions. However, soil moisture maps are difficult to produce for natural landscapes because of vegetation cover and complex topography. Satellite-based L-band microwave sensors are commonly used to develop spatial soil moisture data products, but most existing L-band satellites provide only coarse-scale information (grid sizes of one to tens of kilometers) that is unsuitable for measuring soil moisture variation at hillslope or watershed scales. L-band sensors are typically deployed on satellite platforms and aircraft but have been too large to deploy on small uncrewed aircraft systems (UAS). There is a need for greater spatial resolution and development of effective measures of soil moisture across a variety of natural vegetation types. To address these challenges, a novel UAS-based L-band radiometer system, recently tested in agricultural settings, was evaluated. In this study, the L-band UAS was used to map soil moisture at 3–50 m resolution in a 13 km² mixed grassland-forested landscape in Sonoma County, California. The results represent the first application of this technology in a natural landscape with complex topography and vegetation. The L-band inversion of the radiative transfer model produced soil moisture maps with an average unbiased root mean squared error (ubRMSE) of 0.07 m³/m³ and a bias of 0.02 m³/m³. Improved fine-scale soil moisture maps developed using UAS-based systems may help inform wildfire risk, improve hydrologic models and streamflow forecasting, and support early detection of landslides.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2024.1414540</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2024.1414540</link>
        <title><![CDATA[Remote estimation of leaf nitrogen content, leaf area, and berry yield in wild blueberries]]></title>
        <pubDate>Mon, 18 Nov 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Kenneth Eteme Anku</author><author>David C. Percival</author><author>Rajasekaran Lada</author><author>Brandon Heung</author><author>Mathew Vankoughnett</author>
        <description><![CDATA[Nitrogen (N) fertilization is a major management requirement for wild blueberry fields. Estimating N status can be difficult given the perennial and heterogeneous nature of the plant, its low N requirement, and residual N effects, which result in frequent over-application of N, excessive canopy growth, and a consequent reduction in berry yields. Therefore, this study aimed to estimate nitrogen content and growth parameters using remote sensing approaches. Three trials were established in three commercial fields in Nova Scotia, Canada. A randomized complete block design (RCBD) with five replicates and a plot size of 6 × 8 m with a 2 m buffer was used. Treatments consisted of 0, 20, 40, 60, and 100 kg N ha⁻¹ of fertilizer. Aerial measurements were collected at 30 m altitude using a DJI Matrice 300 UAV equipped with an RGB camera and a multispectral camera. Field measurements, including leaf nitrogen content (LNC), leaf area, floral bud numbers, stem height, and yield, were conducted. Several vegetation indices (VIs) were computed for each plot, and correlation and regression analyses were conducted. Results indicated that treatments with high nitrogen rates had correspondingly high leaf area index (LAI) measurements, with the 60 kg ha⁻¹ rate achieving the best growth parameters compared to the other treatments. LNC, LAI, and berry yield estimations using VIs [green leaf index (GLI), green red vegetation index (GRVI), and visible atmospherically resistant index (VARI)] produced significant R² values of 0.43, 0.48, and 0.30, respectively. Results from this study illustrated the potential of using VIs to estimate LNC, LAI, and berry yield parameters. It was established that the near-infrared VIs are the most effective in estimating differences in nitrogen rates, making them suitable for use in prescription maps for N fertilization applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2024.1305991</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2024.1305991</link>
        <title><![CDATA[Evaluating the potential for efficient, UAS-based reach-scale mapping of river channel bathymetry from multispectral images]]></title>
        <pubdate>2024-04-04T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Carl J. Legleiter</author><author>Lee R. Harrison</author>
        <description><![CDATA[Introduction: Information on spatial patterns of water depth in river channels is valuable for numerous applications, but such data can be difficult to obtain via traditional field methods. Ongoing developments in remote sensing technology have enabled various image-based approaches for mapping river bathymetry; this study evaluated the potential to retrieve depth from multispectral images acquired by an uncrewed aircraft system (UAS). Methods: More specifically, we produced depth maps for a 4 km reach of a clear-flowing, relatively shallow river using an established spectrally based algorithm, Optimal Band Ratio Analysis. To assess accuracy, we compared image-derived estimates to direct measurements of water depth. The field data were collected by wading and from a boat equipped with an echo sounder and used to survey cross sections and a longitudinal profile. We partitioned our study area along the Sacramento River, California, USA, into three distinct sub-reaches and acquired a separate image for each one. In addition to the typical, self-contained, per-image depth retrieval workflow, we also explored the possibility of exporting a relationship between depth and reflectance calibrated using data from one site to the other two sub-reaches. Moreover, we evaluated whether sampling configurations progressively more sparse than our full field survey could still provide sufficient calibration data for developing robust depth retrieval models. Results: Our results indicate that under favorable environmental conditions like those observed on the Sacramento River during low flow, accurate, precise depth maps can be derived from images acquired by UAS, not only within a sub-reach but also across multiple, adjacent sub-reaches of the same river. Discussion: Moreover, our findings imply that the level of effort invested in obtaining field data for calibration could be significantly reduced. In aggregate, this investigation suggests that UAS-based remote sensing could facilitate highly efficient, cost-effective, operational mapping of river bathymetry at the reach scale in clear-flowing streams.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2024.1370697</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2024.1370697</link>
        <title><![CDATA[A new framework for improving semantic segmentation in aerial imagery]]></title>
        <pubdate>2024-03-19T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Shuke He</author><author>Chen Jin</author><author>Lisheng Shu</author><author>Xuzhi He</author><author>Mingyi Wang</author><author>Gang Liu</author>
        <description><![CDATA[High spatial resolution (HSR) remote sensing imagery presents a rich tapestry of foreground-background intricacies, rendering semantic segmentation in aerial contexts a formidable and vital undertaking. At its core, this challenge revolves around two pivotal problems: 1) mitigating background interference and enhancing foreground clarity, and 2) accurately segmenting dense clusters of small objects. Conventional semantic segmentation methods primarily cater to the segmentation of large-scale objects in natural scenes, yet they often falter when confronted with aerial imagery’s characteristic traits such as vast background areas, diminutive foreground objects, and densely clustered targets. In response, we propose a novel semantic segmentation framework tailored to overcome these obstacles. To address the first challenge, we leverage PointFlow modules in tandem with the Foreground-Scene (F-S) module. PointFlow modules act as a barrier against extraneous background information, while the F-S module fosters a symbiotic relationship between the scene and foreground, enhancing clarity. For the second challenge, we adopt a dual-branch structure termed disentangled learning, comprising Foreground Precedence Estimation and Small Object Edge Alignment (SOEA). Our foreground saliency guided loss optimally directs the training process by prioritizing foreground examples and challenging background instances. Extensive experimentation on the iSAID and Vaihingen datasets validates the efficacy of our approach. Not only does our method surpass prevailing generic semantic segmentation techniques, but it also outperforms state-of-the-art remote sensing segmentation methods.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2024.1351703</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2024.1351703</link>
        <title><![CDATA[Secure learning-based coordinated UAV–UGV framework design for medical waste transportation]]></title>
        <pubdate>2024-03-15T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Desh Deepak Sharma</author><author>Jeremy Lin</author>
        <description><![CDATA[A cost-effective solution with less human involvement must be developed for medical waste (MW) transportation. A learning-based coordinated unmanned aerial vehicle–unmanned ground vehicle (UAV–UGV), or CUU, framework with a transfer learning algorithm is suggested. The transfer learning algorithm is implemented for collision-free optimal path planning. In the framework, mobile ground robots collect medical waste from waste disposal centers through the pick-and-place technique. Then, networked drones lift the collected medical waste and fly through a predefined optimal trajectory. The framework considers the dynamic behavior of the environment and explores the actions for picking, placing, and dropping medical waste. A deep reinforcement learning mechanism provides rewards for each successful or unsuccessful action taken by the framework. With optimal policies, the coordinated UAV and UGV adapt their actions to dynamic conditions. The optimal cost of transporting medical waste under the proposed framework is formulated by considering the weight of the MW packets as the payload of the CUU framework, the cost of steering the UAV and UGV, and the time required to transport the MW. The effectiveness of the CUU framework for MW transportation has been tested using MATLAB. The MW transportation data have been encrypted using an encryption key for security and authenticity.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2023.1095275</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2023.1095275</link>
        <title><![CDATA[UAS remote sensing applications to abrupt cold region hazards]]></title>
        <pubdate>2023-08-14T00:00:00Z</pubdate>
        <category>Review</category>
        <author>Megan Verfaillie</author><author>Eunsang Cho</author><author>Lauren Dwyre</author><author>Imran Khan</author><author>Cameron Wagner</author><author>Jennifer M. Jacobs</author><author>Adam Hunsaker</author>
        <description><![CDATA[Unoccupied aerial systems (UAS) are an established technique for collecting data on cold region phenomena at high spatial and temporal resolutions. While many studies have focused on remote sensing applications for monitoring long term changes in cold regions, the role of UAS for detection, monitoring, and response to rapid changes and direct exposures resulting from abrupt hazards in cold regions is in its early days. This review discusses recent applications of UAS remote sensing platforms and sensors, with a focus on observation techniques rather than post-processing approaches, for abrupt, cold region hazards including permafrost collapse and event-based thaw, flooding, snow avalanches, winter storms, erosion, and ice jams. The pilot efforts highlighted in this review demonstrate the potential capacity for UAS remote sensing to complement existing data acquisition techniques for cold region hazards. In many cases, UASs were used alongside other remote sensing techniques (e.g., satellite, airborne, terrestrial) and in situ sampling to supplement existing data or to collect additional types of data not included in existing datasets (e.g., thermal, meteorological). While the majority of UAS applications involved creation of digital elevation models or digital surface models using Structure-from-Motion (SfM) photogrammetry, this review describes other applications of UAS observations that help to assess risks, identify impacts, and enhance decision making. As the frequency and intensity of abrupt cold region hazards change, it will become increasingly important to document and understand these changes to support scientific advances and hazard management. The decreasing cost and increasing accessibility of UAS technologies will create more opportunities to leverage these techniques to address current research gaps.
Overcoming challenges related to implementation of new technologies, modifying operational restrictions, bridging gaps between data types and resolutions, and creating data tailored to risk communication and damage assessments will increase the potential for UAS applications to improve the understanding of risks and to reduce those risks associated with abrupt cold region hazards. In the future, cold region applications can benefit from the advances made by these early adopters who have identified exciting new avenues for advancing hazard research via innovative use of both emerging and existing sensors.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2023.1182973</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2023.1182973</link>
        <title><![CDATA[Erratum: Accuracy of UAV photogrammetry in glacial and periglacial alpine terrain: A comparison with airborne and terrestrial datasets]]></title>
        <pubdate>2023-03-20T00:00:00Z</pubdate>
        <category>Erratum</category>
        <author>Frontiers Production Office</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2022.1085808</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2022.1085808</link>
        <title><![CDATA[Spectral variability in fine-scale drone-based imaging spectroscopy does not impede detection of target invasive plant species]]></title>
        <pubdate>2023-01-16T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Kelsey Huelsman</author><author>Howard Epstein</author><author>Xi Yang</author><author>Lydia Mullori</author><author>Lucie Červená</author><author>Roderick Walker</author>
        <description><![CDATA[Land managers are making concerted efforts to control the spread of invasive plants, a task that demands extensive ecosystem monitoring, for which unoccupied aerial vehicles (UAVs or drones) are becoming increasingly popular. The high spatial resolution of UAV imagery may positively or negatively affect plant species differentiation, as reflectance spectra of pixels may be highly variable when finely resolved. We assessed this impact on detection of the invasive plant species Ailanthus altissima (tree of heaven) and Elaeagnus umbellata (autumn olive) using fine-resolution images collected in northwestern Virginia in June 2020 by a UAV with a Headwall Hyperspec visible and near-infrared hyperspectral imager. Though E. umbellata had greater intraspecific variability relative to interspecific variability over more wavelengths than A. altissima, the classification accuracy was greater for E. umbellata (95%) than for A. altissima (66%). This suggests that spectral differences between species of interest and others are not necessarily obscured by intraspecific variability. Therefore, the use of UAV-based spectroscopy for species identification may overcome reflectance variability in fine-resolution imagery.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2022.1038287</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2022.1038287</link>
        <title><![CDATA[UAV hyperspectral imaging for multiscale assessment of Landsat 9 snow grain size and albedo]]></title>
        <pubdate>2023-01-12T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>S. McKenzie Skiles</author><author>Christopher P. Donahue</author><author>Adam G. Hunsaker</author><author>Jennifer M. Jacobs</author>
        <description><![CDATA[Snow albedo, a measure of the amount of solar radiation that is reflected at the snow surface, plays a critical role in Earth’s climate and in regional hydrology because it is a primary driver of snowmelt timing. Satellite multi-spectral remote sensing provides a multi-decade record of land surface reflectance, from which snow albedo can be retrieved. However, this observational record is challenging to assess because discrete in situ observations are not well suited for validation of snow properties at the spatial resolution of satellites (tens to hundreds of meters). For example, snow grain size, a primary driver of snow albedo, can vary at the sub-meter scale driven by changes in aspect, elevation, and vegetation. Here, we present a new uncrewed aerial vehicle hyperspectral imaging (UAV-HSI) method for mapping snow surface properties at high resolution (20 cm). A Resonon near-infrared HSI was flown on a DJI Matrice 600 Pro over the meadow encompassing Swamp Angel Study Plot in Senator Beck Basin, Colorado. Using a radiative transfer forward modeling approach, effective snow grain size and albedo maps were produced from measured surface reflectance. Coincident ground observations were used for validation; relative to retrievals from a field spectrometer, the mean grain size difference was 2 μm, with an RMSE of 12 μm, and the mean broadband albedo was within 1% of that measured near the center of the flight area. Even though the snow surface was visually homogenous, the maps showed spatial variability and coherent patterns in the freshly fallen snow. To demonstrate the potential of UAV-HSI to improve validation of satellite retrievals, the high-resolution maps were used to assess grain size and albedo retrievals, and subpixel variability, across 17 Landsat 9 OLI pixels from a satellite overpass with similar conditions two days following the flight. Although Landsat 9 did not capture the same range of values and spatial variability as the UAV-HSI, on average the comparison showed good agreement, with a mean grain size difference of 9 μm and the same broadband albedo (86%).]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2022.1027065</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2022.1027065</link>
        <title><![CDATA[Assessing UAV-based laser scanning for monitoring glacial processes and interactions at high spatial and temporal resolutions]]></title>
        <pubdate>2022-12-12T00:00:00Z</pubdate>
        <category>Methods</category>
        <author>Nathaniel R. Baurley</author><author>Christopher Tomsett</author><author>Jane K. Hart</author>
        <description><![CDATA[Uncrewed Aerial Vehicles (UAVs), in combination with Structure from Motion (SfM) photogrammetry, have become an established tool for reconstructing glacial and ice-marginal topography, yet the method is highly dependent on several factors, all of which can be highly variable in glacial environments. However, recent technological advancements, related primarily to the miniaturisation of new payloads such as compact Laser Scanners (LS), have provided potential new opportunities for cryospheric investigation. Indeed, UAV-LS systems have shown promise in forestry, river, and snow depth research, but to date the method has yet to be deployed in glacial settings. As such, in this study we assessed the suitability of UAV-LS for glacial research by investigating short-term changes in ice surface elevation, calving front geometry and crevasse morphology over the near-terminus region of an actively calving glacier in southeast Iceland. We undertook repeat surveys over a 0.1 km2 region of the glacier at sub-daily, daily, and weekly temporal intervals, producing directly georeferenced point clouds at very high spatial resolutions (average of >300 points m−2 at 40 m flying height). Our data have enabled us to: 1) Accurately map surface elevation changes (median errors under 0.1 m), 2) Reconstruct the geometry and evolution of an active calving front, 3) Produce more accurate estimates of the volume of ice lost through calving, and 4) Better detect surface crevasse morphology, providing future scope to extract size and depth and to improve monitoring of their evolution through time. We also compared our results to data obtained in parallel using UAV-SfM, which further emphasised the relative advantages of our method and its suitability in glaciology.
Consequently, our study highlights the potential of UAV-LS in glacial research, particularly for investigating glacier mass balance, changing ice dynamics, and calving glacier behaviour, and thus we suggest it has a significant role in advancing our knowledge of, and ability to monitor, rapidly changing glacial environments in future.]]></description>
      </item>
      </channel>
    </rss>