<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Remote Sensing | Multi- and Hyper-Spectral Imaging section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/remote-sensing/sections/multi--and-hyper-spectral-imaging</link>
        <description>RSS Feed for Multi- and Hyper-Spectral Imaging section in the Frontiers in Remote Sensing journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>2026-05-13T05:23:13.551+00:00</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2026.1768049</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2026.1768049</link>
        <title><![CDATA[Long-term retrieval (1998-2023) of colored dissolved organic matter absorption coefficient and dissolved organic carbon in the Mackenzie River-Beaufort Sea region using the Copernicus GlobColour product]]></title>
        <pubDate>2026-04-17T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>María Sánchez-Urrea</author><author>Martí Galí</author><author>Marta Umbert</author><author>Carolina Gabarró</author><author>Eva De Andrés</author><author>Rafael Gonçalves-Araujo</author>
        <description><![CDATA[In the regime-shifting Arctic, organic carbon export from river watersheds is expected to rise due to changes in hydrological regimes and permafrost thawing, affecting coastal and shelf biogeochemistry. Ocean color remote sensing enables monitoring of inaccessible areas like the Beaufort Sea, improving our knowledge of coastal dynamics and land-to-ocean transport of Chromophoric Dissolved Organic Matter (CDOM) and Dissolved Organic Carbon (DOC) during the ice-free season. While multi-sensor datasets provide consistent monitoring over extended periods, the viability of using merged L3 remote sensing reflectance (Rrs) to derive carbon-based regional products remains unexplored. This study uses the Copernicus-GlobColour multi-sensor merged Rrs to generate a long-term dataset of CDOM absorption at 443 nm, aCDOM(443), and DOC concentrations in the Mackenzie River–Beaufort Sea region. We employ a modified GIOP (Generalized Inherent Optical Properties) inversion algorithm to retrieve aCDOM(443) and then derive DOC using a region-specific aCDOM(443)–DOC relationship. Validation against in situ observations shows performance comparable to previous studies, with a MdAPD (r2) of 56% (0.55) for aCDOM(443) and 29% (0.70) for DOC. This novel dataset allows for a detailed analysis of riverine plume dynamics, interannual variability, and trends. Average summer plume extent ranges from 43,068 to 92,388 km2 and is significantly correlated with annual discharge (r = 0.6, p < 0.001). When compared to independent in situ DOC estimates (1999–2017) at the river mouth, our satellite product shows consistent variability patterns. Contrary to initial expectations, a significant decline of −0.017 m−1 yr−1 and −3.40 μM yr−1 is observed over the 26-year period for aCDOM(443) and DOC, respectively, at the Mackenzie delta outflow, resulting from a slight increase (1998–2015) followed by a decrease (2016–2023). These results suggest that trends in DOC fluxes need to be assessed using longer time series while considering significant uncertainties in both retrieval accuracy and variable spatiotemporal coverage over multidecadal periods.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2026.1759371</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2026.1759371</link>
        <title><![CDATA[Estimating flavonoids using radiative transfer model inversion on imaging spectroscopy data]]></title>
        <pubDate>2026-03-20T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Bongokuhle S’phesihle Sibiya</author><author>Moses Azong Cho</author><author>Onisimo Mutanga</author><author>John Odindi</author><author>Cecilia Masemola</author><author>Johannes Van Staden</author>
        <description><![CDATA[Assessing flavonoid content is crucial for understanding plant responses to biotic and abiotic stresses. However, pigments such as chlorophylls, carotenoids, and anthocyanins absorb light between 410 and 430 nm, making it difficult to quantify flavonoid content in this region. Spectral indices have been developed to improve sensitivity to flavonoids, but they are largely empirical and lack transferability across environments. Therefore, this study compares the effectiveness of the PROSAIL-D radiative transfer model (RTM) and an empirical approach in mapping flavonoids from AVIRIS data in the Fynbos biome using spectral indices. Results show that PROSAIL-D consistently outperformed the empirical method, with the spectral index FI417,693 achieving an R2 of 0.63 and an RMSE of 0.24 mg catechin equivalents (CAE)/g. The empirical approach with the same index yielded an R2 of 0.46 and an RMSE of 5.11 mg CAE/g. Overall, the RTM demonstrated superior accuracy, with RMSE values ranging from 0.24 to 1.70 mg CAE/g, compared to 5.11 to 6.28 mg CAE/g for the empirical methods. These findings highlight the potential of integrating AVIRIS data with PROSAIL for non-destructive, remote sensing-based assessment of flavonoids, offering a scalable approach for monitoring plant health and stress responses at the canopy level.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2026.1710758</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2026.1710758</link>
        <title><![CDATA[Case 2 Regional Coast Colour: a neural network-based framework for atmospheric correction and in-water retrievals across multiple ocean colour satellite sensors]]></title>
        <pubDate>2026-03-18T00:00:00Z</pubDate>
        <category>Methods</category>
        <author>Dagmar Müller</author><author>Martin Hieronymi</author><author>Ana B. Ruescas</author><author>Marco Peters</author><author>Rüdiger Röttgers</author><author>Marcel König</author><author>Carole Lebreton</author><author>Kerstin Stelzer</author><author>Carsten Brockmann</author><author>Roland Doerffer</author>
        <description><![CDATA[Since the 1990s, Doerffer and Schiller have been developing physics-based neural network algorithms for analyzing ocean colour in satellite imagery of optically complex coastal waters. At its core, the approach uses neural networks to solve the inversions in various aspects of solar radiative transfer in both the atmosphere and water, including atmospheric correction, towards the estimation of inherent optical properties (IOPs) of the water constituents. Empirical bio-optical models are then applied to derive constituent concentrations from these IOPs. Over the years, this algorithm has evolved significantly and is now widely recognized as Case-2 Regional CoastColour (C2RCC), a trusted tool within the ocean colour research community. Originally designed for the MERIS sensor aboard ENVISAT, C2RCC is now the operational ground segment processor for generating Case-2 (complex) water products from Sentinel-3 OLCI data and from Sentinel-2 MSI data in the Copernicus Marine High Resolution Ocean Colour Service. Adaptations of the algorithm have also been developed for other satellite missions, including SeaWiFS, MODIS, VIIRS, Landsat OLI, and Sentinel-2 MSI. The C2RCC processor is freely accessible through the Sentinel Application Platform (SNAP). This article provides an overview of the background and evolution of the C2RCC algorithm, presenting validation results at coastal sites and in inland waters alongside user performance evaluations analyzing the influence of system vicarious calibration gains. It highlights cases where the algorithm delivers reliable results as well as its limitations and areas for future improvement. In its current iteration for Sentinel-3 OLCI, C2RCC performs effectively, particularly in moderately absorbing or scattering Case-2 waters.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2026.1810164</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2026.1810164</link>
        <title><![CDATA[Editorial: Earth observations from the deep space: 10 years of the DSCOVR mission]]></title>
        <pubDate>2026-03-02T00:00:00Z</pubDate>
        <category>Editorial</category>
        <author>A. Lyapustin</author><author>A. Marshak</author><author>A. Szabo</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1691652</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1691652</link>
        <title><![CDATA[NISTAR measurements confirm basic aspects of EPIC-derived global-scale diurnal variability in Earth’s reflected radiation]]></title>
        <pubDate>2026-02-11T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Andrew A. Lacis</author><author>Gary L. Russell</author><author>Barbara E. Carlson</author><author>Wenying Su</author><author>Yinan Yu</author>
        <description><![CDATA[A unique model/data comparison capability is made possible by the viewing geometry from NASA’s DSCOVR Mission Lissajous orbit around the Lagrangian L1 point. The key feature of this location is the large orbital inclination relative to the perpendicular of the Sun-Earth line-of-sight. This circumstance enables periodic Sun-Earth-Satellite phase angle shifts ranging from 2 to 12 degrees with a repeating ∼3-month periodicity. At such extreme phase angles, backscattered radiation for spherical cloud-top particles is strongly phase angle dependent, but not for irregular-shaped ice particles. Also key are the near-hourly high-resolution EPIC images that have been converted to radiative solar fluxes through extensive use of ancillary satellite data and CERES-based ADMs. These EPIC-derived SW fluxes, integrated over the Earth’s sunlit hemisphere, constitute the EPIC Composite dataset of 1-day resolution global-scale reflected SW fluxes, which have been shown to agree well with CERES reflected SW fluxes. Using the EPIC data as a template, the DSCOVR satellite ephemeris enables aggregation of climate GCM run-time output over the sunlit hemisphere with the same viewing geometry as EPIC. Generating the GCM-equivalent global-scale SW flux dataset, together with the EPIC data, forms the basis for a new paradigm in model/data comparisons. The key advantages of this DSCOVR-style approach are (1) identical space-time sampling with identical viewing geometry and complex, but identical, averaging over the diurnal cycle between observations and climate GCM output data, (2) preservation of short-period variability at 1-day resolution due to the Earth’s rotation, and (3) self-consistent weather noise suppression by identical averaging over the sunlit hemisphere. Early examples of the EPIC data variability drew concerns from colleagues who worried that the variability might be modeling noise. The only way to resolve this concern is to find another data source that shows the same degree of variability. Definitive comparisons to NISTAR measurements presented in this study unequivocally confirm that the global EPIC-derived variability is real, and not a data artifact.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1753296</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1753296</link>
        <title><![CDATA[SuperDove radiometric data assessment in coastal and inland waters]]></title>
        <pubDate>2026-02-02T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Ilaria Cazzaniga</author><author>Ana I. Dogliotti</author><author>Susanne Kratzer</author><author>Frédéric Mélin</author>
        <description><![CDATA[The use of high-resolution data in aquatic applications increased significantly in the last decade with the launch of decametre-scale optical sensors. More recently, commercial very-high resolution (VHR) sensors, offering finer spatial and temporal resolutions, have shown the potential of complementing data from high-resolution missions. Planet SuperDove (SD), with a band setting similar to the Copernicus Sentinel-2 MultiSpectral Instrument (S2-MSI), a 3-m spatial resolution, and quasi-daily revisiting time, shows the potential for widening water monitoring applications to smaller water basins and finer-scale phenomena. However, the uncertainties in SD products need to be quantified to assess their fitness for purpose in these applications. This work aims to provide uncertainty estimates for SD-derived aquatic remote sensing reflectance (RRS) in different water types, benefitting from the radiometric measurements of the AERONET-OC network. RRS was derived both from the Surface Reflectance (SR) products distributed by Planet and from data processed with ACOLITE. The comparability between SD and S2-MSI products was also assessed by comparing RRS and Rayleigh-corrected reflectance (RRC) from S2-MSI and SD. The results indicate generally low performance across all bands for both SD RRS products, except in the most turbid waters, and highlight the lack of a publicly available robust atmospheric correction processor for SD data for most optical water types. The comparison to S2-MSI shows promising results only when comparing RRC values, but differences still suggest issues associated with calibration and radiometry of the SD sensors. The results also highlight the need for a harmonization strategy to ensure consistent integration of these datasets within multi-source monitoring systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2026.1756531</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2026.1756531</link>
        <title><![CDATA[Moon observations from the Lagrange point L1 by the EPIC/DSCOVR spectrometer]]></title>
        <pubDate>2026-01-30T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Nick Gorkavyi</author><author>Nickolay Krotkov</author><author>Alexander Marshak</author>
        <description><![CDATA[DSCOVR/EPIC, located at the Sun-Earth Lagrange point (L1) around 1.5 million kilometers away from Earth, can capture images of the near and far sides of the Moon at multiple UV-VIS-NIR wavelengths. These observations were previously used only for calibration purposes. In this study, for the first time, images of the Moon taken by EPIC are treated as scientific data with a unique set of characteristics: (1) they were acquired under full-disk illumination of the Moon; (2) they were taken in 10 narrow wavelength bands, from the ultraviolet (317 nm) to the near-infrared (780 nm); (3) at each wavelength, the entire lunar disk is imaged simultaneously; and (4) the images can be oversampled to reduce noise levels and increase spatial resolution. These features of the lunar images allow the creation of high-quality maps of the far and near sides of the Moon in 10 quasi-monochromatic wavelength channels. These maps will serve as a reference for comparison with data from other satellites in lunar orbit. The study of multispectral images of the Moon presented in this paper reveals a significant mineralogical difference between the farside and the nearside of the Moon. We interpret the studied spectral features of the Moon as indicating an increased concentration of ilmenite (a titanium-iron oxide mineral, FeTiO3) on the nearside of the Moon, particularly in the Sea of Tranquility.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1692306</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1692306</link>
        <title><![CDATA[Coupling ecological concepts with an ocean-colour model: inversion modelling]]></title>
        <pubDate>2026-01-29T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Xuerong Sun</author><author>Robert J. W. Brewin</author><author>Shubha Sathyendranath</author><author>Giorgio Dall’Olmo</author><author>David Antoine</author><author>Ray Barlow</author><author>Astrid Bracher</author><author>Malika Kheireddine</author><author>Mengyu Li</author><author>Jaime Pitarch</author><author>Dionysios E. Raitsos</author><author>Fang Shen</author><author>Gavin H. Tilstone</author><author>Vincenzo Vellucci</author><author>Yuan Zhang</author>
        <description><![CDATA[Monitoring phytoplankton from space can help detect shifts in marine ecosystems, particularly under accelerating climate change. However, most existing ocean-colour chlorophyll-a (Chl-a) algorithms are empirical in nature, and do not explicitly consider any potential optical effects of shifts in phytoplankton community composition independent of a change in Chl-a. Similar ocean-colour signals may arise from different combinations of Chl-a and phytoplankton community composition. Revealing how phytoplankton are responding to environmental change using satellite data requires tackling this ambiguity. In previous work, we developed an Ocean Colour Modelling Framework (OCMF) to simulate ocean colour for varying Chl-a and phytoplankton size classes (PSCs). Here, we invert the OCMF to directly retrieve Chl-a, key inherent optical properties (IOPs), and PSCs, from satellite remote sensing reflectance and sea surface temperature (SST), accounting for deviations in non-algal particles (NAP) and coloured dissolved organic matter (CDOM) from assumed open ocean relationships with Chl-a. The model is validated using a global in situ dataset and shows stable performance across diverse oceanic conditions. Integrating ecological concepts into a bio-optical model may advance our ability to interpret long-term changes in phytoplankton community structure from space.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1712816</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1712816</link>
        <title><![CDATA[SMDe: enhancing hyperspectral image denoising through a spatial-spectral modulated network]]></title>
        <pubDate>2026-01-12T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Ke Yang</author><author>Weixiang Yuan</author><author>Beibei Fang</author><author>Huiwu Xia</author><author>Xiaoxue Xing</author><author>Weiwei Shang</author>
        <description><![CDATA[Hyperspectral image denoising is crucial for restoring effective information from noisy data and plays a significant role in various downstream applications. Deep learning-based methods have become the mainstream research direction due to their ability to handle complex noise. However, the spatial feature extraction of most existing methods is not comprehensive enough, and the adoption of a fixed spectral reconstruction mode does not fully utilize the spectral information of the original image. To address these issues, we propose a Spatial-Spectral Modulated Network for hyperspectral image denoising (SMDe). It consists of a spatial feature extraction network and a spectral modulation module. For the spatial feature extraction, we construct a hybrid network that combines Mamba and Transformer layers, which can effectively capture global and local spatial information. For the spectral modulation module, we design a discrimination strategy that adaptively preserves or reconstructs spectral information from the original image spectra. This information is then used to modulate spatial features, enhancing the spectral fidelity of the denoised image. Experiments on synthetic datasets show that SMDe outperforms other advanced methods in most noisy scenarios, not only restoring image details but also maintaining excellent spectral consistency. In cross-dataset and real-data evaluations, the denoising results of SMDe also demonstrate strong competitiveness.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1703604</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1703604</link>
        <title><![CDATA[Coloured dissolved organic matter in a coastal arctic environment and the implications for dissolved organic carbon monitoring from Sentinel-2 MSI]]></title>
        <pubDate>2026-01-12T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Ximena Aguilar Vega</author><author>Dalin Jiang</author><author>Agneta Fransson</author><author>Melissa Chierici</author><author>Jose Luis Iriarte</author><author>Arne Kristoffersen</author><author>Carlos Cárdenas</author><author>Evangelos Spyrakos</author>
        <description><![CDATA[This study presents a rare, high-quality dataset of bio-geo-optical properties from an Arctic glacio-marine fjord (Kongsfjorden, Svalbard), collected during the critical spring melt and sea-ice transition period (April 2023). To our knowledge, this is the first study to utilise Sentinel-2 MSI to retrieve coloured dissolved organic matter (CDOM) and DOC in such an optically complex, high-latitude nearshore ecosystem during this season. Our findings directly address persistent challenges in Arctic remote sensing (RS). We first characterised the system’s bio-geo-optical properties, identifying CDOM as the primary light-absorbing constituent. We then demonstrated that existing atmospheric correction models (ACOLITE, C2RCC, POLYMER) perform poorly over this area, showing large errors. To overcome this, we established a regionally tuned empirical algorithm using Sentinel-2 MSI Rrs bands (490, 560, 665, and 704 nm) that provides accurate estimations of CDOM absorption (aCDOM (443)) from both in-situ and MSI data. Furthermore, we established new relationships between CDOM and DOC using our in-situ data. Applying these to MSI imagery revealed spatio-temporal dynamics: higher DOC concentrations characterised the outer fjord in spring, contrasting with higher concentrations observed at the inner-fjord glacial terminus in summer. This contribution provides a validated methodology and crucial recommendations for the RS of carbon in optically complex Arctic nearshore environments.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1640320</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1640320</link>
        <title><![CDATA[Gaia’s Crown: a deep space mirage seen from DSCOVR/EPIC during lunar transit]]></title>
        <pubDate>2026-01-12T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Karin Blank</author><author>Jay Herman</author><author>Sarah Dangelo</author><author>Alexander Marshak</author><author>Andrew Tennenbaum</author>
        <description><![CDATA[The Earth Polychromatic Imaging Camera (EPIC), onboard the Deep Space Climate Observatory (DSCOVR) spacecraft, located at the Earth-Sun Lagrange 1 point, has captured a unique optical effect during lunar occultation named “Gaia’s Crown.” In EPIC images, the phenomenon appears as a small “flange” at the Earth–Moon contact when the Moon is roughly half below Earth’s limb; it is present in the visible and near-infrared channels but absent in the ultraviolet. Using atmospheric data and 3D, voxel-based ray tracing models, this effect was identified as a combination of atmospheric distortion and a complex mirage caused by variations in the Earth’s atmosphere. Additionally, it is shown that while satellites closer to the Earth can see a similar phenomenon, Gaia’s Crown presents unique distortion effects that demonstrate how EPIC’s vantage point at 1.5 million kilometers from Earth provides a different perspective on atmospheric optics.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1685883</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1685883</link>
        <title><![CDATA[Fully illuminated Jupiter disk albedo and limb darkening observed by DSCOVR-EPIC from the Earth–Sun Lagrange-1 orbit]]></title>
        <pubDate>2026-01-08T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Jay Herman</author><author>Karin Blank</author>
        <description><![CDATA[Multispectral images of Jupiter were obtained by the Earth Polychromatic Imaging Camera (EPIC) orbiting at the Earth–Sun Lagrange point 1 (L1) on 15 March 2016 and again on 5 June 2019 using a 30-cm Cassegrain telescope imaging on a 2,048 × 2,048 pixel detector with a 0.62° field of view. The images of Jupiter were obtained using 10 narrow bandpass filters (in the range of 317.5–779.5 nm) that were radiometrically calibrated and designed to have very little out-of-band transmission. The EPIC instrument was carefully corrected for geometric stray-light effects, pixel non-uniformity (flat fielding), and etaloning (680–780 nm). The Jupiter images were contained in a small disk of diameter 43 pixels near the center of the detector. The resulting images had a spatial resolution of 4,900 km and showed clear evidence of limb darkening, the east-west bands, and the red spot of Jupiter. These results were compared with previous measurements from Jupiter filter images obtained by the Hubble Space Telescope, from a ground-based filter instrument at the Tortugas Mountain Observatory operated by New Mexico State University, and from the portable filter device PlanetCam at the Calar Alto Observatory in Spain. The EPIC estimates of the whole-disk albedo are in good agreement with previous high-spectral-resolution spectrometer results (from the European Southern Observatory in La Silla, Chile) in the visible and near-infrared wavelengths but are lower in five ultraviolet (UV) narrow bandpass filter channels (318–388 nm). A possible reason for this disagreement with the spectrometer-estimated UV albedo could be out-of-band stray light from the spectrometer grating. The EPIC observations from L1 have better spatial resolution than ground-based filter measurements and are expected to provide improved estimates of Jupiter’s limb darkening. Absorption by methane was considered during the measurements, and the current mixing ratio of 2 × 10−3 is estimated to be insufficient to explain the decrease in albedo between 764 and 779.5 nm unless the reflecting cloud layer is at a pressure of two atmospheres.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1691948</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1691948</link>
        <title><![CDATA[Global-scale seasonal variability profiles of EPIC-derived vs. GISS ModelE-simulated all-cloud and ice-cloud fraction distributions]]></title>
        <pubDate>2025-12-18T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Gary L. Russell</author><author>Andrew A. Lacis</author><author>Barbara E. Carlson</author><author>Wenying Su</author><author>Juliet A. Pilewskie</author>
        <description><![CDATA[Detailed cloud information over the Earth’s sunlit hemisphere is an important EPIC-image byproduct stemming from reflected solar shortwave (SW) flux determination from EPIC-image backscattered radiances. Using MODIS and CERES satellite retrievals, EPIC spectral radiances are transformed into pixel-level broadband radiances. Cloud property information gathered from low-Earth-orbit and geostationary retrievals coincident with the EPIC-view geometry is selected. CERES angular distribution models (ADMs) are utilized to accomplish the EPIC radiance-to-flux conversion. Clouds, being the principal contributors to Earth’s planetary albedo, are also the controlling factor regulating the Earth’s global energy balance. The prime focus here is to study the global-scale variability of the terrestrial energy balance using global-scale EPIC-derived reflected fluxes and cloud property information obtained with daily time resolution, a unique capability specific to DSCOVR Mission EPIC data acquired from the Lissajous orbital vantage point around the Lagrangian L1 point. One major sticking point in model/data comparisons is that climate GCMs and the real world exhibit quasi-chaotic variability. Thus, cloud maps generated from climate GCM output and satellite data retrievals can only provide qualitative information in model/data comparisons. Global integration suppresses the ubiquitous weather noise, but issues with viewing geometry, diurnal cycle, and space-time resolution incompatibilities persist in model/data comparisons utilizing traditional monthly-mean GCM output formats and traditional monthly-mean satellite data displays. DSCOVR Mission EPIC data, coupled with DSCOVR satellite ephemeris-enabled GCM data aggregation, provide a promising new approach. In this approach, integration over the sunlit hemisphere eliminates the quasi-chaotic weather noise while assuring identical viewing geometry and consistent diurnal cycle sampling. The Earth’s rotation provides precise longitudinal alignment of the variability. Moreover, this approach makes possible day-by-day model/data comparisons and brings into scrutiny relevant cloud process timescales that are otherwise excluded in traditional monthly-mean model/data comparisons. Results to date show that DSCOVR Mission measurements from the Lagrangian L1 vantage point, including the use of ancillary and byproduct data assembled within this format, constitute a new and powerful capability for model/data variability profile comparisons operating at 1-day time resolution.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1702834</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1702834</link>
        <title><![CDATA[Dimension reduction and entropy-based hyperspectral image visualization using hue, saturation, and brightness]]></title>
        <pubDate>2025-12-11T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Vahe Atoyan</author><author>Thomas Bawin</author><author>Laura Jaakola</author><author>Anna Avetisyan</author>
        <description><![CDATA[Hyperspectral imaging (HSI) captures rich spectral data across hundreds of contiguous bands for diverse applications. Dimension reduction (DR) techniques are commonly used to map the first three reduced dimensions to the red, green, and blue channels for RGB visualization of HSI data. In this study, we propose a novel approach, HSBDR-H, which defines pixel colors by first mapping the two reduced dimensions to hue and saturation gradients and then calculating per-pixel brightness based on band entropy so that pixels with high intensities in informative bands appear brighter. HSBDR-H can be applied on top of any DR technique, improving image visualization while preserving low computational cost and ease of implementation. Across all tested methods, HSBDR-H consistently outperformed standard RGB mappings in image contrast, structural detail, and informativeness, especially on highly detailed urban datasets. These results suggest that HSBDR-H can complement existing DR-based visualization techniques and enhance the interpretation of complex hyperspectral data in practical applications. Tested in remote sensing applications involving urban and agricultural datasets, the method shows potential for broader use in other disciplines requiring high-dimensional data visualization.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1644460</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1644460</link>
        <title><![CDATA[Deep learning-based Sentinel-2 super-resolution via channel attention and high-frequency feature enhancement]]></title>
        <pubdate>2025-12-01T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Khang Nguyen-Vi</author><author>Bao Bui-Quoc</author><author>Nidal Kamel</author>
<description><![CDATA[Introduction: High-resolution satellite imagery is essential for environmental monitoring, land-use assessment, and disaster management, particularly in Southeast Asia—a region marked by ecological diversity, rapid urbanization, and climate vulnerability. However, the limited spatial resolution of several key spectral bands in widely used platforms such as Sentinel-2 constrains fine-scale analysis, especially in resource-limited contexts. Methods: To overcome these limitations, we develop an enhanced deep learning–based super-resolution framework that extends the DSen2 architecture through two dedicated components: a High-Pass Frequency (HPF) enhancement layer designed to better recover fine spatial details, and a Channel Attention (CA) mechanism that adaptively prioritizes the most informative spectral bands. The model is trained and evaluated on a geographically diverse Sentinel-2 dataset covering 30 regions across Vietnam, serving as a representative case study for Southeast Asian landscapes. Results: Quantitative evaluation using Root Mean Square Error (RMSE) shows that the proposed framework consistently outperforms bicubic interpolation and the original DSen2 model. The most substantial improvements are observed in the red-edge and shortwave infrared (SWIR) bands, which are critical for vegetation and land-surface analysis. Discussion: The performance gains achieved by the proposed model translate into more accurate and operationally useful high-resolution imagery for downstream applications, including vegetation health monitoring, water resource assessment, and urban change detection. Overall, the method provides a scalable and computationally efficient approach for enhancing Sentinel-2 data quality, with Vietnam serving as a practical benchmark for broader deployment across Southeast Asia.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1690337</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1690337</link>
        <title><![CDATA[Performance of glint correction algorithms for Sentinel-3 OLCI data]]></title>
        <pubdate>2025-12-01T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Rejane S. Paulino</author><author>Vitor S. Martins</author><author>Cassia B. Caballero</author><author>Thainara M. A. Lima</author><author>Daniel A. Maciel</author><author>Julio C. P. Santos</author><author>Bingqing Liu</author>
<description><![CDATA[The Sentinel-3 (A/B) Ocean and Land Colour Imager (OLCI) provides daily global coverage and the spectral quality needed for monitoring optical water quality indicators across diverse aquatic systems. Accurate retrieval of remote sensing reflectance (Rrs) from OLCI imagery requires a series of radiometric correction procedures. In particular, glint correction algorithms are essential for accounting for the impact of specular reflections of sunlight and skylight at the air-water interface, which can distort the radiance measured at the satellite sensor. Despite its importance, the performance of glint correction algorithms remains underexplored for Sentinel-3 (A/B) OLCI imagery and represents a research gap for its application. In this study, we analyzed the principles and performance of three image-based sunglint correction algorithms and one skyglint correction method across varying intensities of glint effects, using 570 Sentinel-3 (A/B) OLCI images acquired between 2020 and 2024. The resulting Rrs retrievals were evaluated against Aerosol Robotic Network for Ocean Color (AERONET-OC) observations at 11 coastal sites. All evaluated sunglint correction methods improved Rrs retrievals compared to no glint correction over various optical water types. Among them, the combination of SCSh (a sunglint removal method designed for optically shallow waters) and SkyG (an analytical skyglint removal method) achieved the best overall performance, yielding the lowest absolute error (ε < 58%) and the smallest number of significantly overcorrected spectra (n = 99). However, challenges remain in the blue spectral range (400–490 nm), where the glint correction methods performed poorly compared to AERONET-OC observations, especially under medium- and high-glint conditions. Moreover, glint-free images were overcorrected by all methods, highlighting the need for reliable glint detection and masking before applying corrections. Our findings demonstrate that while existing glint correction methods can significantly improve data quality under low- and medium-glint conditions, high-glint scenarios continue to pose difficulties. Addressing these limitations is essential to ensure the consistent and accurate use of Sentinel-3 (A/B) OLCI data for aquatic monitoring.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1685415</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1685415</link>
        <title><![CDATA[A decade of global hourly aerosol observations from DSCOVR/EPIC using near-UV measurements]]></title>
        <pubdate>2025-11-14T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Omar Torres</author><author>Hiren Jethva</author><author>Changwoo Ahn</author><author>Vinay Kayetha</author>
<description><![CDATA[The availability of air quality (AQ) and climate-related satellite information from the Earth Polychromatic Imaging Camera (EPIC) on the Deep Space Climate Observatory (DSCOVR), in its Lagrange Point 1 (L1) orbital configuration, significantly enhances the diurnal coverage over large areas of the sunlit side of the globe. The simultaneous availability of L1 and geostationary (GEO) observations, along with traditional low-Earth-orbit (LEO) satellite measurements, offers a unique opportunity for the integration of a truly global AQ-observing system. In this paper, we discuss the decadal aerosol record from near-UV observations made by the EPIC-DSCOVR sensor. The near-UV EPIC aerosol record (EPICAERUV) shows a large increase in the atmospheric carbonaceous aerosol load over the last decade. In addition to the well-known seasonally varying sources of carbonaceous aerosols produced by biomass burning, a new source of carbonaceous particulate matter associated primarily, but not exclusively, with wildfires in the Northern Hemisphere has emerged and has been consistently detected over the last 10 years by EPIC and other spaceborne sensors. Unlike biomass burning aerosols, which are produced in tropical and sub-tropical regions and generally reside in the lower troposphere, carbonaceous aerosols from wildfires in mid- and high-latitude regions often ascend to the middle and upper troposphere and, in some instances, reach the lower stratosphere, where their residence time is significantly longer than in the troposphere. Wildfires and severe dust storms over the last decade have been observed by EPIC on a global basis, from Australia and Chile in the Southern Hemisphere to North America, Central Europe, Siberia, and China in the Northern Hemisphere. We show and document some of these prominent aerosol events using the EPICAERUV aerosol dataset.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1683919</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1683919</link>
        <title><![CDATA[Size matters: the influence of pixel resolution on DSCOVR/EPIC reflectance and cloud metrics]]></title>
        <pubdate>2025-11-12T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Alfonso Delgado-Bonal</author><author>Alexander Marshak</author><author>Yuekui Yang</author>
<description><![CDATA[Satellite-derived reflectance and cloud retrievals are highly sensitive to spatial scale. Coarser pixels exaggerate cloud fraction and bias optical thickness and height estimates, because unresolved subpixel variability violates plane-parallel assumptions. Here, we use DSCOVR/EPIC Level-1 reflectance (317–780 nm) and Level-2 cloud products (binary cloud mask, effective cloud height, and ice/liquid optical thickness) to quantify these effects. Full-disk images were down-sampled to eight resolutions of 1,024, 512, 256, 128, 64, 32, 16, and 8 pixels across the disk, corresponding to ∼12, 25, 50, 100, 200, 400, 800, and 1,600 km per pixel, respectively. Reflectances were aggregated by simple averaging; cloud masks, by five subpixel thresholds (≥1%, ≥25%, ≥50%, ≥75%, and 100% cloudy); and cloud height and optical thickness, by mean values when ≥50% of subpixels were valid. Global means of reflectance, cloud fraction, cloud height, and optical thickness were then calculated at each scale and threshold. While reflectance averages remained constant to within 1% across all scales, the cloud fraction rose steeply under permissive thresholds as resolution coarsened. Mean cloud height and optical thickness also increased, reflecting the dominance of taller, thicker clouds in coarse-pixel averages. These results quantify resolution-driven biases in EPIC cloud products and underscore the value of high-resolution observations and heterogeneity-aware retrieval methods for robust cloud characterization.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1677438</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1677438</link>
        <title><![CDATA[Simultaneous retrieval of aerosol optical depth, spectral absorption and layer height from DSCOVR EPIC using MAIAC algorithm]]></title>
        <pubdate>2025-10-20T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Alexei Lyapustin</author><author>Myungje Choi</author><author>Yujie Wang</author><author>Sergey Korkin</author><author>Edward Hyer</author><author>Alexander Marshak</author>
<description><![CDATA[A novel MAIAC algorithm is described for joint retrievals of aerosol optical depth (AOD), spectral absorption, and layer height (ALH) from DSCOVR EPIC observations in the UV-Vis-NIR spectral range, including the atmospheric oxygen A- and B-bands. While the oxygen bands have been used to estimate ALH in several existing algorithms, MAIAC for the first time employs a synergy between the UV and O2 A- and B-bands to enhance sensitivity to the aerosol layer height and retrieves it simultaneously with other major aerosol properties. The ALH retrieval capability is illustrated using several examples of smoke and dust aerosols over different parts of the globe. A global AERONET validation of aerosol properties based on the full EPIC data record (mid-2015–2025) shows an AOD accuracy with correlation coefficient R ∼ 0.71–0.73, RMSE ∼ 0.4, and expected error EE ∼ 20%. While the AOD accuracy is moderate due to the backscattering view geometry of EPIC, the achieved agreement of spectral single scattering albedo (SSA) at 443 and 680 nm with AERONET inversion data is very good: the expected error of ±0.03 agrees with the AERONET uncertainty, the RMSE is within 0.02–0.03, and the bias is within ±0.01. The ALH product was validated globally for the overlapping EPIC–CALIPSO period using the CALIOP total backscatter weighted height (ALHC). The ALH validation shows robust performance, with a global RMSE ∼ 1.1 km and 60%–77% of retrievals within EE = ±1 km. The retrieved ALH is lower than the CALIOP ALHC by 0.45–0.75 km over land and is unbiased over the ocean. This new capability and suite of aerosol products, designed to support both Earth system modeling and air quality applications, are part of the version 3 MAIAC EPIC algorithm. The v3 algorithm has recently completed reprocessing of the EPIC record covering the period 2015–2025.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsen.2025.1666123</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsen.2025.1666123</link>
        <title><![CDATA[SenFus-CHCNet: a multi-resolution fusion framework for sparse-supervised canopy height classification]]></title>
        <pubdate>2025-10-17T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Bao Bui-Quoc</author><author>Khang Nguyen-Vi</author><author>Anh Vu-Duc</author><author>Nidal Kamel</author>
<description><![CDATA[Introduction: Accurate forest canopy height mapping is critical for understanding ecosystem structure, monitoring biodiversity, and supporting climate change mitigation strategies. Methods: In this paper, we present SenFus-CHCNet, a novel deep learning architecture designed to produce high-resolution canopy height classification maps by fusing multispectral (Sentinel-2) and synthetic aperture radar (SAR) (Sentinel-1) imagery with GEDI LiDAR data. The proposed model comprises two main components: a Multi-source and Multi-band Fusion Module that effectively integrates data of varying spatial resolutions through resolution-aware embedding and aggregation, and a Pixel-wise Classification Module based on a customized U-Net architecture optimized for sparse supervision. To discretize continuous canopy height values, we evaluate three classification schemes—coarse, medium, and fine-grained—each balancing ecological interpretability with model learning efficiency. Results: Extensive experiments conducted over complex forested landscapes in northern Vietnam demonstrate that SenFus-CHCNet outperforms state-of-the-art baselines, including both convolutional and transformer-based models, achieving up to a 4.5% improvement in relaxed accuracy (RA±1) and a 10% gain in F1-score. Qualitative evaluations confirm that the predicted maps preserve fine-scale structural detail and ecologically meaningful spatial patterns, even in regions with sparse GEDI coverage. Discussion: Our findings highlight the effectiveness of deep fusion learning for canopy height estimation, particularly in resource-limited settings. SenFus-CHCNet provides a scalable and interpretable approach for forest monitoring at regional and national scales, with promising implications for biodiversity conservation, carbon accounting, and land-use planning.]]></description>
      </item>
      </channel>
    </rss>