<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="brief-report">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdata.2019.00042</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Brief Research Report</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>DeepEmSat: Deep Emulation for Satellite Data Mining</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Duffy</surname> <given-names>Kate</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/735510/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Vandal</surname> <given-names>Thomas</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/727551/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Li</surname> <given-names>Shuang</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Ganguly</surname> <given-names>Sangram</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/101822/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Nemani</surname> <given-names>Ramakrishna</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Ganguly</surname> <given-names>Auroop R.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/600759/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Sustainability and Data Sciences Laboratory, Department of Civil and Environmental Engineering, Northeastern University</institution>, <addr-line>Boston, MA</addr-line>, <country>United States</country></aff>
<aff id="aff2"><sup>2</sup><institution>Ames Research Center, NASA</institution>, <addr-line>Mountain View, CA</addr-line>, <country>United States</country></aff>
<aff id="aff3"><sup>3</sup><institution>Bay Area Environmental Research Institute</institution>, <addr-line>Petaluma, CA</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Ranga Raju Vatsavai, North Carolina State University, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Fang Jin, Texas Tech University, United States; Lidan Shou, Zhejiang University, China</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Auroop R. Ganguly <email>a.ganguly&#x00040;northeastern.edu</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Data Mining and Management, a section of the journal Frontiers in Big Data</p></fn></author-notes>
<pub-date pub-type="epub">
<day>10</day>
<month>12</month>
<year>2019</year>
</pub-date>
<pub-date pub-type="collection">
<year>2019</year>
</pub-date>
<volume>2</volume>
<elocation-id>42</elocation-id>
<history>
<date date-type="received">
<day>18</day>
<month>05</month>
<year>2019</year>
</date>
<date date-type="accepted">
<day>13</day>
<month>11</month>
<year>2019</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2019 Duffy, Vandal, Li, Ganguly, Nemani and Ganguly.</copyright-statement>
<copyright-year>2019</copyright-year>
<copyright-holder>Duffy, Vandal, Li, Ganguly, Nemani and Ganguly</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>The growing volume of Earth science data available from climate simulations and satellite remote sensing offers unprecedented opportunity for scientific insight, while also presenting computational challenges. One potential area of impact is atmospheric correction, where physics-based numerical models retrieve surface reflectance information from top of atmosphere observations, and are too computationally intensive to be run in real time. Machine learning methods have demonstrated potential as fast statistical models for expensive simulations and for extracting credible insights from complex datasets. Here, we develop DeepEmSat: a deep learning emulator approach for atmospheric correction, and offer comparison against physics-based models to support the hypothesis that deep learning can make a contribution to the efficient processing of satellite images.</p></abstract> 
<kwd-group>
<kwd>remote sensing</kwd>
<kwd>machine learning</kwd>
<kwd>deep learning</kwd>
<kwd>atmospheric correction</kwd>
<kwd>emulator</kwd>
</kwd-group>
<contract-sponsor id="cn001">Division of Computing and Communication Foundations<named-content content-type="fundref-id">10.13039/100000143</named-content></contract-sponsor>
<contract-sponsor id="cn002">Division of Information and Intelligent Systems<named-content content-type="fundref-id">10.13039/100000145</named-content></contract-sponsor>
<contract-sponsor id="cn003">Ames Research Center<named-content content-type="fundref-id">10.13039/100006195</named-content></contract-sponsor>
<counts>
<fig-count count="2"/>
<table-count count="3"/>
<equation-count count="2"/>
<ref-count count="25"/>
<page-count count="8"/>
<word-count count="4761"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Contemporary satellite remote sensing contributes Earth science data to public repositories at an unprecedented volume (Overpeck et al., <xref ref-type="bibr" rid="B17">2011</xref>). This abundance of data has drawn interest in applying machine learning (ML) to data mining (Castelluccio et al., <xref ref-type="bibr" rid="B1">2015</xref>; Xie et al., <xref ref-type="bibr" rid="B24">2016</xref>; Mou et al., <xref ref-type="bibr" rid="B16">2017</xref>), climate data downscaling (Vandal et al., <xref ref-type="bibr" rid="B21">2017</xref>), and advancing process understanding in the Earth sciences (Reichstein et al., <xref ref-type="bibr" rid="B18">2019</xref>). These emerging success stories suggest that machine learning has potential for extracting credible insights from complex datasets across multiple domains.</p>
<p>Land surface products such as crop forecasts, vegetation indices, snow cover, and burned area are derived from a basic parameter termed surface reflectance (SR). SR is a characteristic of the Earth&#x00027;s surface and is produced from raw, top of atmosphere (TOA) observations by removing the effects of atmospheric scattering and absorption. This process, termed atmospheric correction (AC), allows greater comparability between observations across space and time. However, physically based numerical models for atmospheric correction are too computationally intensive to be calculated in real time, relying instead on look-up tables with precomputed values. Additionally, atmospheric correction models must be tuned for new sensors, which may have short operational lifespans.</p>
<p>Here, we examine the hypothesis that deep learning can make contributions to the efficient processing of satellite data. We develop an experiment in atmospheric correction and present results to suggest that a deep learning model can be trained to emulate a complex physical process. Results are presented to demonstrate the emulator&#x00027;s stable retrieval of surface reflectance when validated against traditional physics-based models.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>2. Materials and Methods</title>
<sec>
<title>2.1. Related Work</title>
<sec>
<title>2.1.1. Atmospheric Correction</title>
<p><xref ref-type="fig" rid="F1">Figure 1</xref> provides a schematic drawing of radiative transfer processes in the atmosphere. Non-learning approaches to AC use physical modeling and empirical relationships to retrieve surface reflectance from observations contaminated by atmospheric scattering and absorption processes that occur in the paths between the sun, the Earth&#x00027;s surface, and the satellite sensor.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>(A)</bold> Physics-based atmospheric correction algorithms simulate reflection and scattering processes at the Earth&#x00027;s surface and in the atmosphere. <bold>(B)</bold> Architecture of the emulator model, a modified ResNet with n residual blocks and N hidden units.</p></caption>
<graphic xlink:href="fdata-02-00042-g0001.tif"/>
</fig>
<p>The algorithm used to derive MOD09GA, the daily SR product from NASA&#x00027;s Moderate Resolution Imaging Spectroradiometer (MODIS), corrects for gases, aerosols, and Rayleigh scattering. Due to prohibitive computational complexity, MOD09GA relies on look-up tables for aerosol retrieval and for precomputed SR retrieved according to atmospheric conditions (Vermote and Kotchenova, <xref ref-type="bibr" rid="B22">2008</xref>). MAIAC is a newer algorithm that uses time series and spatial analysis to detect clouds, retrieve aerosol thickness, and retrieve SR (Lyapustin et al., <xref ref-type="bibr" rid="B13">2011a</xref>,<xref ref-type="bibr" rid="B14">b</xref>, <xref ref-type="bibr" rid="B15">2012</xref>). MAIAC uses two algorithms, depending on whether the observation area is stable or undergoing rapid change, as classified by its change detection algorithm (Lyapustin et al., <xref ref-type="bibr" rid="B15">2012</xref>). These approaches, and other state-of-the-art approaches including Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH), rely on sensor calibration and retrieval of accurate atmospheric conditions (Cooley et al., <xref ref-type="bibr" rid="B2">2002</xref>).</p>
</sec>
<sec>
<title>2.1.2. Machine Learning in Remote Sensing</title>
<p>ML techniques have been applied to remote sensing with results that improve upon non-learning methods. ML has been used to implement empirical bias corrections to MODIS measurements (Lary et al., <xref ref-type="bibr" rid="B9">2009</xref>). In atmospheric correction, a Support Vector Machine (SVM) has been used to predict SR from TOA reflectance, with good agreement between the reflectance products retrieved by the ML method and by two radiative transfer models (Zhu et al., <xref ref-type="bibr" rid="B25">2018</xref>). This approach trains a separate model for each band.</p>
<p>Prior work also has blended data produced by multiple satellites to obtain synthetic images with enhanced spatial or temporal resolution (Gao et al., <xref ref-type="bibr" rid="B4">2006</xref>). Convolutional Neural Networks (CNN) have been used in remote sensing for tasks such as land cover classification, object detection and precipitation downscaling, which make use of local correlation structures (Castelluccio et al., <xref ref-type="bibr" rid="B1">2015</xref>; Long et al., <xref ref-type="bibr" rid="B12">2017</xref>; Mou et al., <xref ref-type="bibr" rid="B16">2017</xref>; Vandal et al., <xref ref-type="bibr" rid="B21">2017</xref>).</p>
<p>Outside of the remote sensing domain, CNNs have been used for style transfer, where image content is preserved and image texture is modified (Gatys et al., <xref ref-type="bibr" rid="B5">2016</xref>). This problem has similarities to the problem of atmospheric correction, in which we wish to preserve semantic structure of the image while applying some effect. In atmospheric correction, this includes reversing the blue shift and reducing the blurring caused by passage through the atmosphere.</p>
</sec>
<sec>
<title>2.1.3. Deep Residual Networks</title>
<p>Deep CNNs can reach an accuracy saturation, where increasing depth is associated with decreasing training accuracy (He et al., <xref ref-type="bibr" rid="B6">2015</xref>). A stack of nonlinear layers has difficulty learning an identity mapping, and thus difficulty preserving the resolution of images. He et al. introduced deep residual networks (ResNets) in 2015; ResNets outperformed state-of-the-art methods in several image recognition competitions and are believed to generalize to both vision and non-vision tasks.</p>
</sec>
</sec>
<sec>
<title>2.2. Datasets</title>
<p>Data from two satellites are used in this experiment: Terra and Himawari-8. Terra is a low Earth orbit (LEO) satellite carrying the MODIS sensor. Terra travels in a north-south direction, passing over the poles and crossing the equator at a near-orthogonal angle. As the Earth rotates, the satellite scans the Earth&#x00027;s surface over a span of hours to days. The Japan Meteorological Agency geostationary (GEO) satellite Himawari-8 carries the Advanced Himawari Imager (AHI) sensor, which has spectral characteristics similar to MODIS. In contrast to LEO satellites, GEO satellites orbit in the same direction as the Earth&#x00027;s rotation, appearing &#x0201C;stationary&#x0201D; when viewed from the Earth&#x00027;s surface. GEO satellites orbit at a higher altitude than LEO satellites, but have the capacity to observe locations within their view with sub-hourly frequency.</p>
<p>The Advanced Himawari Imager TOA reflectance described below comprises the input to the emulator model. Surface reflectance produced from Terra&#x00027;s MODIS is the target for prediction. An Advanced Himawari Imager SR product, also calibrated against MODIS SR, provides performance comparison with a physically-based model.</p>
<sec>
<title>2.2.1. AHI TOA Reflectance</title>
<p>To prepare TOA reflectance, raw scans from Himawari-8 are georeferenced and assembled into a gridded format. Pixel values are converted to TOA reflectance according to the Himawari-8/9 Himawari Standard Data User&#x00027;s Guide, Version 1.2 (Japan Meteorological Agency, <xref ref-type="bibr" rid="B7">2015</xref>). The resulting full disk TOA is reprojected into geographic (latitude-longitude) projection with a 120&#x000B0; by 120&#x000B0; extent and 0.01&#x000B0; resolution. The domain, extending from 85&#x000B0;E to 155&#x000B0;W and 60&#x000B0;N to 60&#x000B0;S, is divided into 6&#x000B0; by 6&#x000B0; tiles. Full disk observations are repeated every 10 min. This gridded product (HM08_AHI05) is publicly available (<ext-link ext-link-type="uri" xlink:href="https://www.geonex.org/">https://www.geonex.org/</ext-link>). Four bands&#x02014;blue, green, red, and near infrared (NIR)&#x02014;are selected from AHI TOA data (<xref ref-type="table" rid="T1">Table 1</xref>). The data is treated as a multi-channel image, concatenated with two additional channels of solar zenith and solar azimuth angles.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Summary of target SR bands: MODIS Terra Level 1B and AHI-12 SR product.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th valign="top" align="center" colspan="2" style="border-bottom: thin solid #000000;"><bold>Center wavelength (nm)</bold></th>
</tr>
<tr>
<th valign="top" align="left"><bold>Band</bold></th>
<th valign="top" align="center"><bold>MODIS Terra</bold></th>
<th valign="top" align="center"><bold>AHI</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Blue</td>
<td valign="top" align="center">470</td>
<td valign="top" align="center">471</td>
</tr>
<tr>
<td valign="top" align="left">Green</td>
<td valign="top" align="center">555</td>
<td valign="top" align="center">510</td>
</tr>
<tr>
<td valign="top" align="left">Red</td>
<td valign="top" align="center">648</td>
<td valign="top" align="center">639</td>
</tr>
<tr>
<td valign="top" align="left">NIR</td>
<td valign="top" align="center">858</td>
<td valign="top" align="center">857</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Center wavelengths are shifted slightly due to the different sensors</italic>.</p>
</table-wrap-foot>
</table-wrap>
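<p>The gridding described above implies a simple tile arithmetic. The sketch below is purely illustrative, with hypothetical helper names (<italic>tile_index</italic>, <italic>stack_input</italic>), and is not the official GeoNEX indexing scheme: it maps a (lat, lon) in the 120&#x000B0; by 120&#x000B0; domain to its 6&#x000B0; by 6&#x000B0; tile, and stacks the four reflectance bands with the two solar-angle channels into the six-channel model input.</p>

```python
import numpy as np

def tile_index(lat, lon, tile_deg=6.0, lat0=60.0, lon0=85.0):
    """Hypothetical helper: map (lat, lon) in the 120 by 120 degree
    domain (60N to 60S, 85E to 155W, crossing the antimeridian) to the
    (row, col) of its 6x6 degree tile."""
    row = int((lat0 - lat) // tile_deg)            # rows count southward from 60N
    col = int(((lon - lon0) % 360.0) // tile_deg)  # cols count eastward from 85E
    return row, col

def stack_input(blue, green, red, nir, sza, saa):
    """Six-channel input image: four TOA reflectance bands plus
    solar zenith and solar azimuth angle channels."""
    return np.stack([blue, green, red, nir, sza, saa], axis=-1)
```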
</sec>
<sec>
<title>2.2.2. MODIS Terra Surface Reflectance</title>
<p>MOD09GA is a seven-band surface reflectance product computed from the Terra MODIS sensor (Vermote and Kotchenova, <xref ref-type="bibr" rid="B22">2008</xref>). This MODIS SR product, which is validated with ground observations, provides a standard for the calibration of other atmospheric correction algorithms (Liang et al., <xref ref-type="bibr" rid="B10">2002</xref>). Four bands from MOD09GA, corresponding to four AHI bands, are selected based on available spatial resolution (<xref ref-type="table" rid="T1">Table 1</xref>). MOD09GA is resampled from its distributed 1 km &#x000D7; 1 km sinusoidal projection to the 0.01&#x000B0; geographic projection described above, using GIS tools.</p>
</sec>
<sec>
<title>2.2.3. AHI Surface Reflectance</title>
<p>GEO surface reflectance is retrieved from AHI TOA reflectance using the MAIAC algorithm (henceforth referred to as MAIAC SR). MAIAC is a semi-empirical algorithm originally developed for MODIS and adapted to perform SR retrievals for Himawari-8 AHI. Performance of MAIAC is evaluated by comparison with MOD09GA. The projection and resolution are identical to those of the AHI TOA reflectance described above. This product (HM08_AHI12) is released as a provisional product and is available upon request (<ext-link ext-link-type="uri" xlink:href="https://www.geonex.org/">https://www.geonex.org/</ext-link>).</p>
<p>All data belong to the 3-month period from December 2016 through February 2017. We use observations over the Australian continent, chosen because it provides a large contiguous landmass with a variety of land cover classes with which to train a flexible emulator. Because satellite images are affected by missing pixels due to clouds, aerosols, and water bodies, we select images for training and testing only if they contain at least 80% valid pixels. We create and apply a composite mask to standardize valid pixels across all images from the same date. Finally, all reflectances are normalized to intensities between 0 and 1.</p>
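<p>The screening steps above can be sketched as follows. The helper names are hypothetical, and the normalization is shown here as clipping to [0, 1], which is an assumption since the exact scaling is not specified in the text.</p>

```python
import numpy as np

def composite_mask(masks):
    """Intersect per-image validity masks from the same date,
    so all images share a standardized set of valid pixels."""
    return np.logical_and.reduce(masks)

def keep_image(mask, min_valid=0.80):
    """Retain an image only if at least 80% of its pixels are valid."""
    return mask.mean() >= min_valid

def normalize(refl):
    """Assumed normalization: clip reflectance into [0, 1]."""
    return np.clip(refl, 0.0, 1.0)
```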
</sec>
</sec>
<sec>
<title>2.3. Proposed Method</title>
<p>In this section we introduce a residual neural network to predict MODIS-like multispectral SR from TOA reflectance and solar geometry. This emulator model is trained with MODIS SR as the target, with the objective of emulating the MOD09GA atmospheric correction algorithm.</p>
<sec>
<title>2.3.1. Network Architecture</title>
<p>We modify ResNets with long and short skip connections, as defined by He et al. and as depicted in <xref ref-type="fig" rid="F1">Figure 1</xref> (He et al., <xref ref-type="bibr" rid="B6">2015</xref>). In this modified architecture, patch dimensions (width and height) are preserved throughout the network, as only relatively local information is necessary to retrieve pixelwise SR. Input patches are treated as six channel images, with four wavelength bands and two solar angle bands. Output images are four channel images with four wavelength bands.</p>
<p>ResNets and CNNs with varying numbers of residual blocks and hidden units are trained to determine the optimal architecture for this application. Models with partial convolutions (ResNet-P and CNN-P) and without partial convolutions (ResNet and CNN) are tested.</p>
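<p>To illustrate the dimension-preserving residual blocks, the following is a minimal NumPy sketch in which the 3 &#x000D7; 3 convolutions of the actual architecture are simplified to per-pixel 1 &#x000D7; 1 "convolutions" (matrix multiplies); it shows only the identity skip connection and the preservation of patch width and height.</p>

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """Simplified residual block. x: (H, W, C) feature map;
    W1, W2: (C, C) weight matrices standing in for 1x1 convolutions.
    The skip connection adds the input back, so spatial dimensions
    are preserved, as in the modified ResNet of Figure 1B."""
    h = relu(x @ W1)
    return relu(x + h @ W2)
```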
</sec>
<sec>
<title>2.3.2. Partial Convolutions</title>
<p>Missing pixel values pose a processing problem in CNNs. When they fall within the convolutional window centered around a neighboring pixel, missing values create anomalous output, or edge effects. Partial convolutions offer a semantically aware method for normalization of output values that performs well on irregularly shaped holes. In this method, a binary mask is used to calculate scaling factors that adjust outputs according to the number of valid inputs.</p>
<p>Given <bold>M</bold>, a binary mask denoting the positions of valid and invalid pixels, <italic>x</italic>, the values in the sliding convolution window, and <bold>W</bold>, the convolution window weights, the output of the partial convolution layer is defined as:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:msup><mml:mi>x</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>W</mml:mi></mml:mstyle><mml:mi>T</mml:mi></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x000B7;</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>M</mml:mi></mml:mstyle><mml:mo stretchy='false'>)</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>s</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>M</mml:mi></mml:mstyle><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mi>b</mml:mi></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mtext>if&#x02009;sum</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mtext>M</mml:mtext><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0003E;</mml:mo><mml:mtext>0</mml:mtext></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mn>0</mml:mn></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mtext>if&#x000A0;sum(M)&#x000A0;=&#x000A0;0</mml:mtext></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Convolutions with some or all valid pixels in the window are properly weighted and accepted as a valid response; convolutions with no valid pixels are not accepted. In each step, the binary mask is updated where a valid response was made, progressively shrinking holes. We adapt partial convolutions to prevent ill effects in processing satellite images with missing pixels due to clouds, cloud shadows and water bodies. Partial convolutions also have the desirable effect of eliminating edge effects in patches when used in combination with zero padding.</p>
<p>Partial convolutions are implemented in TensorFlow using the convolutional operation described by Liu et al. (<xref ref-type="bibr" rid="B11">2018</xref>). While partial convolutions shrink holes in images, our approach reapplies the original mask to the model output. This preserves the interpretability of the results, as ground truth surface reflectance values are not available for all missing pixels that are inferred through inpainting. We compare the results of both models with partial and regular convolutions.</p>
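<p>Equation (1) can be sketched for a single channel as an explicit NumPy loop. This is an illustration of the arithmetic only, not the TensorFlow operation of Liu et al. used in the actual implementation: each output is the masked windowed sum scaled by 1/sum(<bold>M</bold>), and the mask is updated wherever a valid response was produced.</p>

```python
import numpy as np

def partial_conv2d(x, mask, W, b=0.0):
    """Single-channel partial convolution following Eq. (1).
    x: (H, W) image; mask: (H, W) binary validity mask;
    W: (k, k) kernel. 'Same' output size via zero padding."""
    k = W.shape[0]
    pad = k // 2
    xp = np.pad(x * mask, pad)   # invalid pixels contribute zero
    mp = np.pad(mask, pad)
    H, Wd = x.shape
    out = np.zeros_like(x, dtype=float)
    new_mask = np.zeros_like(mask)
    for i in range(H):
        for j in range(Wd):
            xm = xp[i:i + k, j:j + k]
            mm = mp[i:i + k, j:j + k]
            s = mm.sum()
            if s > 0:                       # at least one valid input
                out[i, j] = (W * xm).sum() / s + b
                new_mask[i, j] = 1          # hole shrinks here
    return out, new_mask
```

On a constant image with an averaging kernel, the 1/sum(<bold>M</bold>) scaling reproduces the constant value even at patch edges and next to holes, which is the edge-effect suppression described above.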
</sec>
<sec>
<title>2.3.3. Implementation Details</title>
<sec>
<title>2.3.3.1. Loss Function</title>
<p>A mean square error loss function with weight regularization is employed to learn the regression based convolutional neural network written as</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mi>&#x02112;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x00398;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>N</mml:mi></mml:mfrac><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover></mml:mstyle><mml:msup><mml:mrow><mml:mo stretchy='true'>(</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mo>&#x00398;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='true'>)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mo>&#x003BB;</mml:mo><mml:msub><mml:mrow><mml:mo>&#x0007C;&#x0007C;</mml:mo><mml:mo>&#x00398;</mml:mo><mml:mo>&#x0007C;&#x0007C;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x00398; consists of weights and bias parameters of neural network <italic>f</italic>.</p>
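<p>A direct NumPy transcription of Equation (2) follows, taking &#x003BB;||&#x00398;||<sub>2</sub> literally as the L2 norm of the parameters; note that in practice weight regularization is often implemented with the squared norm instead.</p>

```python
import numpy as np

def loss(y, y_pred, params, lam=1e-4):
    """Mean square error plus L2 weight regularization, as in Eq. (2).
    params: list of parameter arrays making up Theta."""
    mse = np.mean((y - y_pred) ** 2)
    l2 = np.sqrt(sum(np.sum(p ** 2) for p in params))  # ||Theta||_2
    return mse + lam * l2
```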
</sec>
<sec>
<title>2.3.3.2. Experimental Setup</title>
<p>Each network is trained on randomly extracted 50 by 50 pixel image patches using Adam optimization with &#x003B2;<sub>1</sub> &#x0003D; 0.999, &#x003B2;<sub>2</sub> &#x0003D; 0.9, &#x003F5; &#x0003D; 1<italic>e</italic> &#x02212; 8, a batch size of 30, and a learning rate of 0.001 (Kingma and Ba, <xref ref-type="bibr" rid="B8">2014</xref>). Observations covering southern Australia are used for training, with northern Australia set aside for testing. By geographically dividing the training and testing data, we ensure that the testing images cover a region entirely unseen in the training examples. The model, implemented in TensorFlow, is trained for 300,000 iterations on one NVIDIA GeForce GTX 1080 Ti graphics card over approximately 7 h.</p>
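<p>For reference, a single Adam update with the hyperparameters reported above can be written out as follows. The &#x003B2; values are taken verbatim from the text (Kingma and Ba's stated defaults are &#x003B2;<sub>1</sub> &#x0003D; 0.9, &#x003B2;<sub>2</sub> &#x0003D; 0.999); this is a scalar sketch, not the TensorFlow optimizer used for training.</p>

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.999, beta2=0.9, eps=1e-8):
    """One Adam update (Kingma and Ba, 2014) with the hyperparameters
    as reported in Section 2.3.3.2. t is the 1-based step index."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```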
</sec>
</sec>
<sec>
<title>2.3.4. Validation</title>
<p>The reflectance product generated by the emulator is validated by comparison with MODIS SR (MOD09GA). Reference to a comprehensively validated SR product is a standard assessment for new SR products (Feng et al., <xref ref-type="bibr" rid="B3">2012</xref>). In addition to direct comparison with MOD09GA, performance of emulator SR retrieval is benchmarked by comparison with the MAIAC SR product, also generated from Himawari-8 TOA reflectance. This MAIAC algorithm has been calibrated using agreement with MODIS SR, and provides a comparison between the emulator and a physically based model using the same sensor.</p>
<p>We use root mean square error (RMSE) as a metric of distance between the prediction and MODIS SR, and evaluate each spectral band individually. RMSE is computed on the dimensionless pixel reflectance, which takes values between 0 and 1. To further assess the goodness of fit, Pearson&#x00027;s r and the related metric <italic>R</italic><sup>2</sup> are calculated to determine the amount of variation in the data explained by the model. <italic>R</italic><sup>2</sup> always falls between 0 and 1, with a higher value indicating a better fit of the model to the data. Pearson&#x00027;s r and <italic>R</italic><sup>2</sup> are common metrics in the remote sensing domain for measuring the coherence between images for validation purposes (Vinukollu et al., <xref ref-type="bibr" rid="B23">2011</xref>; Tang et al., <xref ref-type="bibr" rid="B20">2014</xref>). Additionally, we compute mutual information (MI) as an image matching metric. Mutual information is a dimensionless quantity expressing how much information one random variable provides about another. MI here is calculated with respect to the MODIS SR product.</p>
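<p>The three metrics can be sketched as follows. The histogram-based mutual information estimate (with an assumed bin count of 32) is one common estimator among several; the text does not specify which estimator was used.</p>

```python
import numpy as np

def rmse(a, b):
    """Root mean square error on dimensionless reflectance."""
    return np.sqrt(np.mean((a - b) ** 2))

def r_squared(a, b):
    """Squared Pearson correlation between two images."""
    r = np.corrcoef(a.ravel(), b.ravel())[0, 1]
    return r ** 2

def mutual_information(a, b, bins=32):
    """Histogram estimate of MI (in nats) between two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                        # joint distribution
    px = p.sum(axis=1, keepdims=True)      # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))
```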
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<sec>
<title>3.1. Compared Methods and Models</title>
<p>For the prediction of surface reflectance, we compare plain CNNs and ResNets of varying depth and width. We test models in a grid-search fashion with 1&#x02013;5 residual blocks and 16&#x02013;128 hidden units. For the modified ResNet, the 5 residual block architecture with 64 hidden units per layer achieves the best performance. For the CNN without residual connections, a 4 layer architecture with 64 hidden units per layer performs best. We test each of these models with partial convolutions (referred to as ResNet-P and CNN-P) and regular convolutions (referred to as ResNet and CNN). As shown in <xref ref-type="table" rid="T2">Table 2</xref>, ResNet-P achieves the best performance among the four models, with 19% lower RMSE than the CNN.</p>
<table-wrap position="float" id="T2">
<label>Table 2A</label>
<caption><p>Performance metrics by band and for full spectrum for ResNet, ResNet with partial convolutions (Resnet-P), CNN, CNN with partial convolutions (CNN-P), and MAIAC.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="middle" align="left" rowspan="2"><bold>Model</bold></th>
<th valign="top" align="center" colspan="5" style="border-bottom: thin solid #000000;"><bold>Testing RMSE (10</bold><sup><bold><bold>&#x02212;2</bold></bold></sup><bold>)</bold></th>
<th valign="top" align="center" colspan="4" style="border-bottom: thin solid #000000;"><bold><bold><italic><bold>R</bold></italic><sup><bold>2</bold></sup></bold></bold></th>
<th valign="middle" align="center" rowspan="2"><bold>Mutual information</bold></th>
</tr>
<tr>
<th valign="top" align="center"><bold>Blue</bold></th>
<th valign="top" align="center"><bold>Green</bold></th>
<th valign="top" align="center"><bold>Red</bold></th>
<th valign="top" align="center"><bold>NIR</bold></th>
<th valign="top" align="center"><bold>Full</bold></th>
<th valign="top" align="center"><bold>Blue</bold></th>
<th valign="top" align="center"><bold>Green</bold></th>
<th valign="top" align="center"><bold>Red</bold></th>
<th valign="top" align="center"><bold>NIR</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">ResNet-P</td>
<td valign="top" align="center"><bold>0.80</bold></td>
<td valign="top" align="center"><bold>1.4</bold></td>
<td valign="top" align="center">2.5</td>
<td valign="top" align="center"><bold>2.8</bold></td>
<td valign="top" align="center"><bold>1.9</bold></td>
<td valign="top" align="center">0.54</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center"><bold>0.86</bold></td>
<td valign="top" align="center"><bold>0.83</bold></td>
<td valign="top" align="center">0.94</td>
</tr>
<tr>
<td valign="top" align="left">ResNet</td>
<td valign="top" align="center">1.0</td>
<td valign="top" align="center">1.7</td>
<td valign="top" align="center"><bold>2.2</bold></td>
<td valign="top" align="center">2.9</td>
<td valign="top" align="center">2.2</td>
<td valign="top" align="center">0.46</td>
<td valign="top" align="center">0.51</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.92</td>
</tr>
<tr>
<td valign="top" align="left">CNN-P</td>
<td valign="top" align="center">1.0</td>
<td valign="top" align="center">1.9</td>
<td valign="top" align="center">2.4</td>
<td valign="top" align="center">3.0</td>
<td valign="top" align="center">2.2</td>
<td valign="top" align="center">0.54</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.92</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">1.1</td>
<td valign="top" align="center">1.9</td>
<td valign="top" align="center">2.4</td>
<td valign="top" align="center">3.2</td>
<td valign="top" align="center">2.3</td>
<td valign="top" align="center"><bold>0.86</bold></td>
<td valign="top" align="center"><bold>0.68</bold></td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">0.88</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">MAIAC</td>
<td valign="top" align="center">1.1</td>
<td valign="top" align="center">2.1</td>
<td valign="top" align="center">3.2</td>
<td valign="top" align="center">5.6</td>
<td valign="top" align="center">3.6</td>
<td valign="top" align="center">0.39</td>
<td valign="top" align="center">0.50</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center"><bold>0.96</bold></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Best values are bolded</italic>.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float">
<label>Table 2B</label>
<caption><p>Performance metrics by band and for full spectrum for ResNet-P and MAIAC across three land cover types.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="middle" align="left" rowspan="2"><bold>Model</bold></th>
<th valign="middle" align="center" rowspan="2"><bold>Land cover</bold></th>
<th valign="top" align="center" colspan="5" style="border-bottom: thin solid #000000;"><bold>Testing RMSE (10</bold><sup><bold>&#x02212;2</bold></sup><bold>)</bold></th>
<th valign="middle" align="center" rowspan="2"><bold>Mutual information</bold></th>
</tr>
<tr>
<th valign="top" align="center"><bold>Blue</bold></th>
<th valign="top" align="center"><bold>Green</bold></th>
<th valign="top" align="center"><bold>Red</bold></th>
<th valign="top" align="center"><bold>NIR</bold></th>
<th valign="top" align="center"><bold>Full</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="middle" align="left" rowspan="3">Emulator</td>
<td valign="top" align="left">Savanna</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">1.2</td>
<td valign="top" align="center">2.2</td>
<td valign="top" align="center">2.7</td>
<td valign="top" align="center">1.9</td>
<td valign="top" align="center">0.95</td>
</tr>
<tr>
<td valign="top" align="left">Shrubland</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">1.1</td>
<td valign="top" align="center">2.3</td>
<td valign="top" align="center">2.3</td>
<td valign="top" align="center">1.8</td>
<td valign="top" align="center">0.95</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Forest</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">1.2</td>
<td valign="top" align="center">2.7</td>
<td valign="top" align="center">4.0</td>
<td valign="top" align="center">2.5</td>
<td valign="top" align="center">1.0</td>
</tr>
<tr>
<td valign="middle" align="left" rowspan="3">MAIAC</td>
<td valign="top" align="left">Savanna</td>
<td valign="top" align="center">1.1</td>
<td valign="top" align="center">1.7</td>
<td valign="top" align="center">1.9</td>
<td valign="top" align="center">3.7</td>
<td valign="top" align="center">2.4</td>
<td valign="top" align="center">0.97</td>
</tr>
<tr>
<td valign="top" align="left">Shrubland</td>
<td valign="top" align="center">1.0</td>
<td valign="top" align="center">2.1</td>
<td valign="top" align="center">3.3</td>
<td valign="top" align="center">6.6</td>
<td valign="top" align="center">4.0</td>
<td valign="top" align="center">0.97</td>
</tr>
<tr>
<td valign="top" align="left">Forest</td>
<td valign="top" align="center">1.0</td>
<td valign="top" align="center">1.5</td>
<td valign="top" align="center">1.3</td>
<td valign="top" align="center">5.2</td>
<td valign="top" align="center">2.9</td>
<td valign="top" align="center">0.98</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>We also evaluate the contribution of solar angle information to performance by training the model with and without solar angle information. We find that this additional information has a negligible impact on prediction accuracy.</p>
</sec>
<sec>
<title>3.2. Prediction of Surface Reflectance</title>
<p>Performance of the emulator is evaluated by comparison with MODIS SR, and benchmarked against the agreement of MAIAC SR with MODIS SR (<xref ref-type="fig" rid="F2">Figure 2</xref>). We evaluate RMSE for each wavelength and for the full spectrum in <xref ref-type="table" rid="T2">Table 2</xref>. This measure of distance identifies ResNet-P as the best performing model, with error on the order of 10% or less of normalized pixel values. Predictions for a representative testing set tile are plotted pixelwise against ground truth data from the MODIS product in <xref ref-type="fig" rid="F2">Figure 2</xref>, together with <italic>R</italic><sup>2</sup> and best-fit parameters (slope and intercept) by wavelength for the testing set. The <italic>R</italic><sup>2</sup> values indicate high agreement between the emulator predictions and the MODIS retrievals of surface reflectance for the red and NIR bands, and lower predictive power for the green and blue bands. Outliers are observed in all bands, particularly where the MODIS reflectance exceeds the model predictions; outliers in SR are generally caused by localized light sources or reflections. Mutual information, which captures both linear and nonlinear dependence, also identifies ResNet-P as the best model among those compared.</p>
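<p>As a concrete illustration of these metrics, the per-band RMSE and <italic>R</italic><sup>2</sup>/best-fit comparison can be sketched in NumPy as follows; the array shapes, band ordering, and normalization here are assumptions for illustration, not details of the DeepEmSat implementation.</p>

```python
import numpy as np

def band_metrics(pred, target, bands=("blue", "green", "red", "nir")):
    """Per-band RMSE, R^2, and best-fit line between predicted SR and a
    reference (e.g., MODIS) SR. Inputs are (H, W, 4) arrays of normalized
    reflectance; the band order is an assumption."""
    out = {}
    for i, name in enumerate(bands):
        p, t = pred[..., i].ravel(), target[..., i].ravel()
        rmse = float(np.sqrt(np.mean((p - t) ** 2)))
        slope, intercept = np.polyfit(p, t, deg=1)   # best-fit line
        r2 = float(np.corrcoef(p, t)[0, 1] ** 2)     # linear agreement
        out[name] = {"rmse": rmse, "slope": float(slope),
                     "intercept": float(intercept), "r2": r2}
    # Full-spectrum RMSE pools the errors of all four bands.
    out["full"] = {"rmse": float(np.sqrt(np.mean((pred - target) ** 2)))}
    return out
```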
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>(A)</bold> Counterclockwise from top left: input TOA reflectance, target MODIS SR, emulator SR, MAIAC SR, visualization of difference between MODIS and two SR products. <bold>(B)</bold> Agreement between model predictions (ResNet-P, CNN-P, and MAIAC) and MODIS SR for 4 bands. Results are presented for one typical 600 &#x000D7; 600 pixel image from the testing set.</p></caption>
<graphic xlink:href="fdata-02-00042-g0002.tif"/>
</fig>
<p>In comparison with MAIAC SR, the emulator SR results in lower RMSE and better <italic>R</italic><sup>2</sup> agreement in all bands. MAIAC outperforms the emulator in MI score. This result suggests that some aspects of MODIS SR may be better captured by a deep learning model, while other aspects are better captured by the physical model.</p>
</sec>
<sec>
<title>3.3. Stability of Retrievals by Land Cover Type</title>
<p>The Australian continent hosts multiple land cover types, including savannas, shrublands, and forests, as delineated by the Collection 6 MODIS Land Cover Product (Sulla-Menashe and Friedl, <xref ref-type="bibr" rid="B19">2018</xref>). We assess the stability of SR retrieval across land cover types by presenting performance metrics for the emulator over tiles dominated by each of these three land cover types.</p>
<p>Metrics for each land cover class are presented in <xref ref-type="table" rid="T2">Table 2</xref>. The results suggest good generalizability of the emulator, with comparable performance across savanna and shrubland, and weaker performance for forested areas, driven largely by higher error in the NIR band.</p>
</sec>
<sec>
<title>3.4. Partial Convolutions for Missing Data</title>
<p>We evaluate the performance of partial vs. regular convolutions in handling missing pixels, and find that the use of partial convolutions produces a 4% reduction in RMSE. Because partial and regular convolutions perform identically over regions of valid pixels, we would expect the RMSE differential between the two techniques to be strongly correlated with the quality of the image, i.e., with the number of missing pixels.</p>
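<p>The re-weighting that distinguishes a partial convolution from a regular one can be sketched for a single channel as follows; this is an illustrative NumPy reimplementation of the operation of Liu et al. (<xref ref-type="bibr" rid="B11">2018</xref>), assuming an odd-sized kernel, and is not the network code used in this study.</p>

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def partial_conv2d(x, mask, kernel):
    """Single-channel partial convolution (cross-correlation convention).

    x: 2-D image, with arbitrary values at missing pixels.
    mask: 2-D binary array, 1 = valid pixel, 0 = missing.
    kernel: 2-D weights with odd height and width.
    Returns the convolved image and the updated (grown) validity mask."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Zero-pad the image and the mask; padding counts as missing pixels.
    xp = np.pad(x * mask, ((ph, ph), (pw, pw)))
    mp = np.pad(mask, ((ph, ph), (pw, pw)))
    xw = sliding_window_view(xp, (kh, kw))       # (H, W, kh, kw) windows
    mw = sliding_window_view(mp, (kh, kw))
    num = np.einsum("ijkl,kl->ij", xw, kernel)   # sum over valid pixels only
    denom = mw.sum(axis=(-2, -1))                # valid count per window
    valid = denom > 0
    # Re-normalize by the fraction of valid inputs under each window;
    # windows with no valid pixel stay zero and remain masked.
    out = np.zeros_like(num)
    out[valid] = num[valid] * (kernel.size / denom[valid])
    return out, valid.astype(x.dtype)
```

<p>For a constant image this normalization recovers the same response at holes and borders as in fully valid regions, which is the property that makes the operation robust to missing pixels.</p>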
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>This work presents DeepEmSat, an emulator for physically based atmospheric correction. The objective is to test the hypothesis that deep learning can contribute to the efficient processing of reflectance observations from Earth-observing satellites. The premise is twofold: that a sufficiently complex neural network can learn the potentially nonlinear mapping from TOA reflectance to SR, and that convolutional networks can harness the semantic relationships between pixels in reflectance observations.</p>
<p>The results of this study suggest that deep learning emulators can make some contribution to efficient processing of satellite images. The evaluation metrics comprise linear measures of similarity (Euclidean distance, linear correlation) as well as a measure from probability theory (mutual information). These metrics may describe different aspects of the relationship between variables. However, it is important to recall that physically-based AC algorithms contain biases and uncertainties of their own, making comparison with existing SR products an imperfect method of validation.</p>
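<p>The probability-theoretic metric named above can be sketched as a joint-histogram estimate of mutual information; the bin count here is an arbitrary illustrative choice, not the estimator configuration used in this study.</p>

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Histogram estimate of the mutual information (in bits) between two
    images, capturing linear and nonlinear dependence alike."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

<p>Identical inputs give a score near the entropy of the binned signal, while independent inputs give a score near zero, up to the small positive bias of finite-sample histogram estimates.</p>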
<p>By training and testing on separate geographic regions, we demonstrate the generalizability of the model to locations outside of the training dataset. Our assessment of emulator performance over various land cover types also suggests stable SR retrieval by the emulator model. We demonstrate the improvement of model accuracy with the addition of partial convolutions, although more rigorous investigation of this effect is warranted. Through comparison with MAIAC, a physically based AC algorithm, we demonstrate the relatively strong performance of the emulator in generating MODIS-like surface reflectance from GEO TOA observations.</p>
<p>Diurnal, seasonal, and annual variation in solar angle limits the comparability of reflectance observations from different times. Validation of emulator retrievals is therefore limited to the approximate time of the MODIS observations, and inferences for other locations, times of day, and seasons should be interpreted with caution. Our dataset comprises only observations over Australia. In future work, the training data could be augmented with year-round observations and with observations from the MODIS Aqua satellite, which passes daily at 1:30 p.m. local time.</p>
</sec>
<sec sec-type="conclusions" id="s5">
<title>5. Conclusion</title>
<p>Prior studies have leveraged machine learning to extract insights from complex Earth science datasets. Here, we examine the hypothesis that a deep learning emulator of a physical model can contribute to efficient satellite data processing. In this work, domain knowledge from atmospheric science is used in covariate selection and design of model architecture. Our results suggest that further work, including development of principled approaches to the blending of physical and data science methods, will be useful to extract insights from a growing volume of remotely sensed Earth science data.</p>
</sec>
<sec sec-type="data-availability-statement" id="s6">
<title>Data Availability Statement</title>
<p>The datasets generated for this study are available on request to the corresponding author.</p>
</sec>
<sec id="s7">
<title>Author Contributions</title>
<p>KD performed the experiment, analyzed the results, and led the preparation of the manuscript. TV assisted with technical aspects including implementation of the model and contributed to the problem definition and manuscript. SL prepared and provided the datasets that were used. SG, RN, and AG provided helpful guidance in conception of the project and design of the experiment.</p>
<sec>
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack><p>This work was supported in part by the Civil and Environmental Engineering Department, Sustainability and Data Sciences Laboratory, Northeastern University.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Castelluccio</surname> <given-names>M.</given-names></name> <name><surname>Poggi</surname> <given-names>G.</given-names></name> <name><surname>Sansone</surname> <given-names>C.</given-names></name> <name><surname>Verdoliva</surname> <given-names>L.</given-names></name></person-group> (<year>2015</year>). <article-title>Land use classification in remote sensing images by convolutional neural networks</article-title>. <source>CoRR</source> abs/1508.00092.</citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cooley</surname> <given-names>T.</given-names></name> <name><surname>Anderson</surname> <given-names>G. P.</given-names></name> <name><surname>Felde</surname> <given-names>G. W.</given-names></name> <name><surname>Hoke</surname> <given-names>M. L.</given-names></name> <name><surname>Ratkowski</surname> <given-names>A. J.</given-names></name> <name><surname>Chetwynd</surname> <given-names>J. H.</given-names></name> <etal/></person-group>. (<year>2002</year>). <article-title>Flaash, a modtran4-based atmospheric correction algorithm, its application and validation</article-title>, in <source>IEEE International Geoscience and Remote Sensing Symposium</source>, <volume>Vol. 3</volume>, (<publisher-loc>Toronto, ON</publisher-loc>) <fpage>1414</fpage>&#x02013;<lpage>1418</lpage>.</citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feng</surname> <given-names>M.</given-names></name> <name><surname>Huang</surname> <given-names>C.</given-names></name> <name><surname>Channan</surname> <given-names>S.</given-names></name> <name><surname>Vermote</surname> <given-names>E. F.</given-names></name> <name><surname>Masek</surname> <given-names>J. G.</given-names></name> <name><surname>Townshend</surname> <given-names>J. R.</given-names></name></person-group> (<year>2012</year>). <article-title>Quality assessment of landsat surface reflectance products using modis data</article-title>. <source>Comput. Geosci.</source> <volume>38</volume>, <fpage>9</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1016/j.cageo.2011.04.011</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>F.</given-names></name> <name><surname>Masek</surname> <given-names>J.</given-names></name> <name><surname>Schwaller</surname> <given-names>M.</given-names></name> <name><surname>Hall</surname> <given-names>F.</given-names></name></person-group> (<year>2006</year>). <article-title>On the blending of the landsat and modis surface reflectance: predicting daily landsat surface reflectance</article-title>. <source>IEEE Trans. Geosci. Remote Sens.</source> <volume>44</volume>, <fpage>2207</fpage>&#x02013;<lpage>2218</lpage>. <pub-id pub-id-type="doi">10.1109/TGRS.2006.872081</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gatys</surname> <given-names>L. A.</given-names></name> <name><surname>Ecker</surname> <given-names>A. S.</given-names></name> <name><surname>Bethge</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Image style transfer using convolutional neural networks</article-title>, in <source>The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source> (<publisher-loc>Las Vegas, NV</publisher-loc>). <pub-id pub-id-type="doi">10.1109/CVPR.2016.265</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Ren</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Deep residual learning for image recognition</article-title>. <source>CoRR</source> abs/1512.03385. <pub-id pub-id-type="doi">10.1109/CVPR.2016.90</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><collab>Japan Meteorological Agency</collab></person-group> (<year>2015</year>). <article-title>Himawari-8/9 himawari standard data user&#x00027;s guide: Version 1.2.1</article-title>.</citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kingma</surname> <given-names>D. P.</given-names></name> <name><surname>Ba</surname> <given-names>J.</given-names></name></person-group> (<year>2014</year>). <article-title>Adam: a method for stochastic optimization</article-title>. <source>CoRR</source> abs/1412.6980.</citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lary</surname> <given-names>D. J.</given-names></name> <name><surname>Remer</surname> <given-names>L. A.</given-names></name> <name><surname>MacNeill</surname> <given-names>D.</given-names></name> <name><surname>Roscoe</surname> <given-names>B.</given-names></name> <name><surname>Paradise</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>Machine learning and bias correction of modis aerosol optical depth</article-title>. <source>IEEE Geosci. Remote Sens. Lett.</source> <volume>6</volume>, <fpage>694</fpage>&#x02013;<lpage>698</lpage>. <pub-id pub-id-type="doi">10.1109/LGRS.2009.2023605</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liang</surname> <given-names>S.</given-names></name> <name><surname>Fang</surname> <given-names>H.</given-names></name> <name><surname>Chen</surname> <given-names>M.</given-names></name> <name><surname>Shuey</surname> <given-names>C. J.</given-names></name> <name><surname>Walthall</surname> <given-names>C.</given-names></name> <name><surname>Daughtry</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2002</year>). <article-title>Validating modis land surface reflectance and albedo products: methods and preliminary results</article-title>. <source>Remote Sens. Environ.</source> <volume>83</volume>, <fpage>149</fpage>&#x02013;<lpage>162</lpage>. <pub-id pub-id-type="doi">10.1016/S0034-4257(02)00092-5</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>G.</given-names></name> <name><surname>Reda</surname> <given-names>F. A.</given-names></name> <name><surname>Shih</surname> <given-names>K. J.</given-names></name> <name><surname>Wang</surname> <given-names>T.</given-names></name> <name><surname>Tao</surname> <given-names>A.</given-names></name> <name><surname>Catanzaro</surname> <given-names>B.</given-names></name></person-group> (<year>2018</year>). <article-title>Image inpainting for irregular holes using partial convolutions</article-title>. <source>CoRR</source> abs/1804.07723. <pub-id pub-id-type="doi">10.1007/978-3-030-01252-6_6</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Long</surname> <given-names>Y.</given-names></name> <name><surname>Gong</surname> <given-names>Y.</given-names></name> <name><surname>Xiao</surname> <given-names>Z.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name></person-group> (<year>2017</year>). <article-title>Accurate object localization in remote sensing images based on convolutional neural networks</article-title>. <source>IEEE Trans. Geosci. Remote Sens.</source> <volume>55</volume>, <fpage>2486</fpage>&#x02013;<lpage>2498</lpage>. <pub-id pub-id-type="doi">10.1109/TGRS.2016.2645610</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lyapustin</surname> <given-names>A.</given-names></name> <name><surname>Martonchik</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Laszlo</surname> <given-names>I.</given-names></name> <name><surname>Korkin</surname> <given-names>S.</given-names></name></person-group> (<year>2011a</year>). <article-title>Multiangle implementation of atmospheric correction (maiac): 1. radiative transfer basis and look-up tables</article-title>. <source>J. Geophys. Res.</source> <fpage>116</fpage>. <pub-id pub-id-type="doi">10.1029/2010JD014985</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lyapustin</surname> <given-names>A.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Laszlo</surname> <given-names>I.</given-names></name> <name><surname>Kahn</surname> <given-names>R.</given-names></name> <name><surname>Korkin</surname> <given-names>S.</given-names></name> <name><surname>Remer</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2011b</year>). <article-title>Multiangle implementation of atmospheric correction (maiac): 2. Aerosol algorithm</article-title>. <source>J. Geophys. Res.</source> 116. <pub-id pub-id-type="doi">10.1029/2010JD014986</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lyapustin</surname> <given-names>A. I.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Laszlo</surname> <given-names>I.</given-names></name> <name><surname>Hilker</surname> <given-names>T.</given-names></name> <name><surname>Hall</surname> <given-names>F. G.</given-names></name> <name><surname>Sellers</surname> <given-names>P. J.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Multi-angle implementation of atmospheric correction for modis (maiac): 3. Atmospheric correction</article-title>. <source>Remote Sens. Environ.</source> <volume>127</volume>, <fpage>385</fpage>&#x02013;<lpage>393</lpage>. <pub-id pub-id-type="doi">10.1016/j.rse.2012.09.002</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mou</surname> <given-names>L.</given-names></name> <name><surname>Ghamisi</surname> <given-names>P.</given-names></name> <name><surname>Zhu</surname> <given-names>X. X.</given-names></name></person-group> (<year>2017</year>). <article-title>Deep recurrent neural networks for hyperspectral image classification</article-title>. <source>IEEE Trans. Geosci. Remote Sens.</source> <volume>55</volume>, <fpage>3639</fpage>&#x02013;<lpage>3655</lpage>. <pub-id pub-id-type="doi">10.1109/TGRS.2016.2636241</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Overpeck</surname> <given-names>J. T.</given-names></name> <name><surname>Meehl</surname> <given-names>G. A.</given-names></name> <name><surname>Bony</surname> <given-names>S.</given-names></name> <name><surname>Easterling</surname> <given-names>D. R.</given-names></name></person-group> (<year>2011</year>). <article-title>Climate data challenges in the 21st century</article-title>. <source>Science</source> <volume>331</volume>, <fpage>700</fpage>&#x02013;<lpage>702</lpage>. <pub-id pub-id-type="doi">10.1126/science.1197869</pub-id><pub-id pub-id-type="pmid">21311006</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reichstein</surname> <given-names>M.</given-names></name> <name><surname>Camps-Valls</surname> <given-names>G.</given-names></name> <name><surname>Stevens</surname> <given-names>B.</given-names></name> <name><surname>Jung</surname> <given-names>M.</given-names></name> <name><surname>Denzler</surname> <given-names>J.</given-names></name> <name><surname>Carvalhais</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Deep learning and process understanding for data-driven earth system science</article-title>. <source>Nature</source> <volume>566</volume>:<fpage>195</fpage>. <pub-id pub-id-type="doi">10.1038/s41586-019-0912-1</pub-id><pub-id pub-id-type="pmid">30760912</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sulla-Menashe</surname> <given-names>D.</given-names></name> <name><surname>Friedl</surname> <given-names>M. A.</given-names></name></person-group> (<year>2018</year>). <source>User Guide to Collection 6 Modis Land Cover (mcd12q1 and mcd12c1) Product</source>. <publisher-loc>Reston, VA</publisher-loc>: <publisher-name>USGS</publisher-name>.</citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tang</surname> <given-names>H.</given-names></name> <name><surname>Brolly</surname> <given-names>M.</given-names></name> <name><surname>Zhao</surname> <given-names>F.</given-names></name> <name><surname>Strahler</surname> <given-names>A. H.</given-names></name> <name><surname>Schaaf</surname> <given-names>C. L.</given-names></name> <name><surname>Ganguly</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Deriving and validating leaf area index (LAI) at multiple spatial scales through lidar remote sensing: a case study in sierra national forest, CA</article-title>. <source>Remote Sens. Environ.</source> <volume>143</volume>, <fpage>131</fpage>&#x02013;<lpage>141</lpage>. <pub-id pub-id-type="doi">10.1016/j.rse.2013.12.007</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Vandal</surname> <given-names>T.</given-names></name> <name><surname>Kodra</surname> <given-names>E.</given-names></name> <name><surname>Ganguly</surname> <given-names>S.</given-names></name> <name><surname>Michaelis</surname> <given-names>A.</given-names></name> <name><surname>Nemani</surname> <given-names>R.</given-names></name> <name><surname>Ganguly</surname> <given-names>A. R.</given-names></name></person-group> (<year>2017</year>). <article-title>DeepSD: generating high resolution climate change projections through single image super-resolution</article-title>, in <source>Proceedings of the 23rd acm sigkdd international conference on knowledge discovery and data mining</source> (<publisher-loc>ACM</publisher-loc>), <fpage>1663</fpage>&#x02013;<lpage>1672</lpage>. <pub-id pub-id-type="doi">10.1145/3097983.3098004</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vermote</surname> <given-names>E. F.</given-names></name> <name><surname>Kotchenova</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>Atmospheric correction for the monitoring of land surfaces</article-title>. <source>J. Geophys. Res.</source> 113. <pub-id pub-id-type="doi">10.1029/2007JD009662</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vinukollu</surname> <given-names>R. K.</given-names></name> <name><surname>Wood</surname> <given-names>E. F.</given-names></name> <name><surname>Ferguson</surname> <given-names>C. R.</given-names></name> <name><surname>Fisher</surname> <given-names>J. B.</given-names></name></person-group> (<year>2011</year>). <article-title>Global estimates of evapotranspiration for climate studies using multi-sensor remote sensing data: evaluation of three process-based approaches</article-title>. <source>Remote Sens. Environ.</source> <volume>115</volume>, <fpage>801</fpage>&#x02013;<lpage>823</lpage>. <pub-id pub-id-type="doi">10.1016/j.rse.2010.11.006</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Xie</surname> <given-names>M.</given-names></name> <name><surname>Jean</surname> <given-names>N.</given-names></name> <name><surname>Burke</surname> <given-names>M.</given-names></name> <name><surname>Lobell</surname> <given-names>D.</given-names></name> <name><surname>Ermon</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Transfer learning from deep features for remote sensing and poverty mapping</article-title>, in <source>Thirtieth AAAI Conference on Artificial Intelligence</source> (<publisher-loc>Phoenix, AZ</publisher-loc>).</citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>S.</given-names></name> <name><surname>Lei</surname> <given-names>B.</given-names></name> <name><surname>Wu</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Retrieval of hyperspectral surface reflectance based on machine learning</article-title>. <source>Remote Sens.</source> <volume>10</volume>(<issue>2</issue>):<fpage>323</fpage>. <pub-id pub-id-type="doi">10.3390/rs10020323</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This work was supported by two National Science Foundation projects including NSF BIG DATA under Grant 1447587 and NSF CyberSEES under Grant 1442728. This work was also supported by NASA Ames Research Center and Bay Area Environmental Research Institute.</p>
</fn>
</fn-group>
</back>
</article>
