<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="review-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Mater.</journal-id>
<journal-title>Frontiers in Materials</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Mater.</abbrev-journal-title>
<issn pub-type="epub">2296-8016</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">791296</article-id>
<article-id pub-id-type="doi">10.3389/fmats.2021.791296</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Materials</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Deep Learning for Photonic Design and Analysis: Principles and Applications</article-title>
<alt-title alt-title-type="left-running-head">Duan et&#x20;al.</alt-title>
<alt-title alt-title-type="right-running-head">Deep Learning for Photonics</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Duan</surname>
<given-names>Bing</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="fn" rid="FN1">
<sup>&#x2020;</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1509425/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wu</surname>
<given-names>Bei</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="fn" rid="FN1">
<sup>&#x2020;</sup>
</xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Chen</surname>
<given-names>Jin-hui</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1275252/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Chen</surname>
<given-names>Huanyang</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1242333/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Yang</surname>
<given-names>Da-Quan</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>State Key Laboratory of Information Photonics and Optical Communications, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications</institution>, <addr-line>Beijing</addr-line>, <country>China</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Institute of Electromagnetics and Acoustics and Fujian Provincial Key Laboratory of Electromagnetic Wave Science and Detection Technology, Xiamen University</institution>, <addr-line>Xiamen</addr-line>, <country>China</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Shenzhen Research Institute of Xiamen University</institution>, <addr-line>Shenzhen</addr-line>, <country>China</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1066806/overview">Lingling Huang</ext-link>, Beijing Institute of Technology, China</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1245527/overview">Ying Li</ext-link>, Zhejiang University, China</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1052039/overview">Yuanmu Yang</ext-link>, Tsinghua University, China</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Da-Quan Yang, <email>ydq@bupt.edu.cn</email>; Jin-hui Chen, <email>jimchen@xmu.edu.cn</email>
</corresp>
<fn fn-type="equal" id="FN1">
<label>
<sup>&#x2020;</sup>
</label>
<p>These authors have contributed equally to this&#x20;work</p>
</fn>
<fn fn-type="other">
<p>This article was submitted to Metamaterials, a section of the journal Frontiers in Materials</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>06</day>
<month>01</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>8</volume>
<elocation-id>791296</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>10</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>08</day>
<month>12</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2022 Duan, Wu, Chen, Chen and Yang.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Duan, Wu, Chen, Chen and Yang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>Innovative techniques play important roles in photonic structure design and complex optical data analysis. As a branch of machine learning, deep learning can automatically reveal the inherent connections behind data by using hierarchically structured layers, and it has found broad applications in photonics. In this paper, we review recent advances of deep learning for photonic structure design and optical data analysis, organized around the two major learning paradigms of supervised learning and unsupervised learning. In addition, optical neural networks with high parallelism and low energy consumption are highlighted as novel computing architectures. The challenges and perspectives of this flourishing research field are also discussed.</p>
</abstract>
<kwd-group>
<kwd>optics and photonics</kwd>
<kwd>deep learning</kwd>
<kwd>photonic structure design</kwd>
<kwd>optical data analysis</kwd>
<kwd>optical neural networks</kwd>
</kwd-group>
<contract-num rid="cn001">11974058</contract-num>
<contract-num rid="cn002">20720200074 20720210045</contract-num>
<contract-sponsor id="cn001">National Natural Science Foundation of China<named-content content-type="fundref-id">10.13039/501100001809</named-content>
</contract-sponsor>
<contract-sponsor id="cn002">Fundamental Research Funds for the Central Universities<named-content content-type="fundref-id">10.13039/501100012226</named-content>
</contract-sponsor>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Over the past few decades, photonics, as an important field of fundamental research, has been penetrating into various domains, such as life science and information technology (<xref ref-type="bibr" rid="B75">Vukusic and Sambles, 2003</xref>; <xref ref-type="bibr" rid="B5">Bigio and Sergio, 2016</xref>; <xref ref-type="bibr" rid="B60">Rav&#xec; et&#x20;al., 2016</xref>). In particular, advances in photonic devices, optical imaging and spectroscopy techniques have further accelerated the wide application of photonics (<xref ref-type="bibr" rid="B73">T&#xf6;r&#xf6;k and Kao, 2007</xref>; <xref ref-type="bibr" rid="B52">Ntziachristos, 2010</xref>; <xref ref-type="bibr" rid="B11">Dong et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B28">Jiang et&#x20;al., 2021</xref>). For example, the creation of metasurfaces/metamaterials has promoted the development of holography and superlenses (<xref ref-type="bibr" rid="B89">Zhang and Liu, 2008</xref>; <xref ref-type="bibr" rid="B85">Yoon et&#x20;al., 2018</xref>), while optical spectroscopy and imaging have found deep utility in medical diagnosis (<xref ref-type="bibr" rid="B8">Chan and Siegel, 2019</xref>; <xref ref-type="bibr" rid="B42">Lundervold and Lundervold, 2019</xref>) and biological studies (<xref ref-type="bibr" rid="B73">T&#xf6;r&#xf6;k and Kao, 2007</xref>). However, for sophisticated photonic devices, the initial design relies on electromagnetic modelling, which is largely guided by physical intuition and previous experience (<xref ref-type="bibr" rid="B47">Ma W. et&#x20;al., 2021</xref>). The specific structure parameters are determined by trial-and-error, and the explorable parametric space is limited by simulation power and time. Besides, the optical data generated from optical measurements are becoming increasingly complex. 
For instance, when applying optical spectroscopy to characterize various analytes (e.g., malignant tumor tissues and bacterial pathogens) in complex biological environments, it is challenging to extract the spectral fingerprints because of the large spectral overlap arising from the chemical bonds common to the analytes (<xref ref-type="bibr" rid="B62">Rickard et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B14">Fang et&#x20;al., 2021</xref>). Traditional analysis methods rely mainly on physical intuition and prior experience, and are therefore time-consuming and susceptible to human&#x20;error.</p>
<p>Recently, the booming field of artificial intelligence has accelerated the pace of technological progress (<xref ref-type="bibr" rid="B18">Goodfellow et&#x20;al., 2016</xref>). In particular, deep learning, as a data-driven method, can automatically reveal the inherent connections behind data by using hierarchically structured layers. It has been widely exploited in computer vision (<xref ref-type="bibr" rid="B43">Luongo et&#x20;al., 2021</xref>), image analysis (<xref ref-type="bibr" rid="B4">Barbastathis et&#x20;al., 2019</xref>), robotic control (<xref ref-type="bibr" rid="B1">Abbeel et&#x20;al., 2010</xref>), driverless cars (<xref ref-type="bibr" rid="B30">Karmakar et&#x20;al., 2021</xref>) and language translation (<xref ref-type="bibr" rid="B82">Wu et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B55">Popel et&#x20;al., 2020</xref>). In photonics applications, deep learning provides a new perspective for device design and optical data analysis (<xref ref-type="bibr" rid="B2">Anjit et&#x20;al., 2021</xref>). It is capable of capturing nonlinear physical correlations, such as the relationship between photonic structures and their electromagnetic responses (<xref ref-type="bibr" rid="B80">Wiecha and Muskens, 2019</xref>; <xref ref-type="bibr" rid="B35">Li et&#x20;al., 2020</xref>). The cross-discipline of deep learning and photonics enables researchers to design photonic devices and decode optical data without explicitly modeling the underlying physical processes or manually manipulating the models (<xref ref-type="bibr" rid="B9">Chen et&#x20;al., 2020</xref>). 
Particular areas of success include materials and structure design (<xref ref-type="bibr" rid="B48">Malkiel et&#x20;al., 2018</xref>; <xref ref-type="bibr" rid="B46">Ma et&#x20;al., 2019</xref>), optical spectroscopy and image analysis (<xref ref-type="bibr" rid="B16">Ghosh et&#x20;al., 2019</xref>; <xref ref-type="bibr" rid="B50">Moen et&#x20;al., 2019</xref>), data storage (<xref ref-type="bibr" rid="B63">Rivenson et&#x20;al., 2019</xref>; <xref ref-type="bibr" rid="B37">Liao et&#x20;al., 2019</xref>), and optical communications (<xref ref-type="bibr" rid="B31">Khan et&#x20;al., 2019</xref>), as shown in <xref ref-type="fig" rid="F1">Figure&#x20;1</xref>. The deep neural networks used for these applications are mainly trained and tested on electronic computing systems. Compared with conventional electronic platforms, photonic systems have attracted increasing attention owing to their low energy consumption, multiple interconnections and high parallelism (<xref ref-type="bibr" rid="B71">Sui et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B17">Goi et&#x20;al., 2021</xref>). Recently, various optical neural network (ONN) architectures have been used for high-speed data analysis, such as the optical interferometric neural network (<xref ref-type="bibr" rid="B66">Shen et&#x20;al., 2017</xref>) and the diffractive optical neural network (<xref ref-type="bibr" rid="B38">Lin et&#x20;al., 2018</xref>).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Applications of the deep learning in optical structure design and data analysis.</p>
</caption>
<graphic xlink:href="fmats-08-791296-g001.tif"/>
</fig>
<p>In this article, we focus on the merging of photonics and deep learning for optical structure design and data analysis. In <xref ref-type="sec" rid="s2">Section 2</xref>, we introduce typical deep learning algorithms, including supervised learning and unsupervised learning. In <xref ref-type="sec" rid="s3">Section 3</xref>, we present deep learning-assisted photonic structure design and optical data analysis. Optical neural networks are highlighted as novel computing architectures in <xref ref-type="sec" rid="s4">Section 4</xref>. In <xref ref-type="sec" rid="s5">Section 5</xref>, we discuss the outlook of this flourishing field and give a short conclusion.</p>
</sec>
<sec id="s2">
<title>2 Principles of Typical Neural Networks</title>
<p>In this section, we introduce several typical deep learning algorithms and elucidate their working principles for cross-disciplinary optical applications. Basically, these algorithms can be divided into supervised learning and unsupervised learning.</p>
<p>In supervised learning, the input training data are accompanied by &#x201c;correct answer&#x201d; labels. During the training process, supervised learning compares the predicted results with the ground-truth labels in the datasets, and constantly optimizes the network to achieve the desired performance. Specifically, it can learn the correlations between photonic structures and optical properties, so as to realize specific optical functions. Supervised learning includes multiple artificial neural networks (<xref ref-type="bibr" rid="B33">LeCun et&#x20;al., 2015</xref>), such as the multilayer perceptron (MLP), convolutional neural networks (CNNs) and recurrent neural networks (RNNs), as shown in <xref ref-type="fig" rid="F2">Figures 2A&#x2013;C</xref>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Schematic illustration of typical deep learning models. (A) Multilayer perceptron, (B) Convolutional neural networks, (C) Recurrent neural networks, (D) Generative adversarial network.</p>
</caption>
<graphic xlink:href="fmats-08-791296-g002.tif"/>
</fig>
<sec id="s2-1">
<title>2.1 Multilayer Perceptron</title>
<p>MLP is a fundamental model from which other artificial neural networks have been developed, so it is usually considered the starting point of deep learning. An MLP is composed of a series of hidden layers that link the inputs and outputs. Neurons in adjacent layers are connected to each other, and each neuron applies a nonlinear activation function. Such a model contains a large number of optimizable parameters, which provides a high capacity to learn complex, nonlinear relationships in optical data. In a typical MLP training process, we need to pre-define a cost function, such as the mean squared error or cross entropy between the predicted and actual values. During optimization, the weights of the neurons are adjusted by the back-propagation algorithm to minimize the cost function. Afterwards, target optical functions such as scattering spectra are imported into the network, and the predicted photonic structures are obtained (<xref ref-type="bibr" rid="B81">Wu et&#x20;al., 2021</xref>). Intuitively, as the number of hidden layers in an MLP increases, the network learns more features and achieves higher training accuracy at the cost of training time. Note that too many hidden layers are prone to cause over-fitting, so an appropriate number of hidden layers is preferred. To address the ubiquitous non-uniqueness problem of inverse design, the tandem network model has been proposed, which cascades an inverse-design network with a forward-modeling network (<xref ref-type="bibr" rid="B39">Liu et&#x20;al., 2018</xref>).</p>
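As a concrete illustration of the training loop just described (a NumPy sketch written for this review, not code from the works cited above), a small MLP with tanh hidden layers is fitted to a toy nonlinear mapping by minimizing a mean-squared-error cost with back propagation. All names, layer sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """One (weights, bias) pair per layer."""
    return [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Return the activations of every layer (tanh hidden, linear output)."""
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(z if i == len(params) - 1 else np.tanh(z))
    return acts

def train_step(params, x, y, lr=0.1):
    """One back-propagation step on the mean-squared-error cost."""
    acts = forward(params, x)
    delta = 2 * (acts[-1] - y) / len(x)          # dC/d(output), linear output
    for i in reversed(range(len(params))):
        W, b = params[i]
        gW, gb = acts[i].T @ delta, delta.sum(axis=0)
        delta = (delta @ W.T) * (1 - acts[i] ** 2)   # tanh derivative
        params[i] = (W - lr * gW, b - lr * gb)       # gradient descent
    return float(np.mean((acts[-1] - y) ** 2))

# Toy task: learn a smooth nonlinear "structure parameters -> response" map.
x = rng.uniform(-1, 1, (256, 2))
y = np.sin(np.pi * x[:, :1]) * np.cos(np.pi * x[:, 1:])

params = init_mlp([2, 32, 32, 1])
losses = [train_step(params, x, y) for _ in range(5000)]
```

The cost decreases monotonically on this smooth toy problem; in the photonic setting, `x` would hold structure parameters and `y` sampled points of a target spectrum.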
</sec>
<sec id="s2-2">
<title>2.2 Convolutional Neural Networks</title>
<p>CNNs are specially designed for image classification (<xref ref-type="bibr" rid="B36">Li et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B20">Guo et&#x20;al., 2017</xref>) and recognition (<xref ref-type="bibr" rid="B23">Hijazi et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B15">Fu et&#x20;al., 2017</xref>), and their performance on specific tasks such as image recognition can even surpass that of humans (<xref ref-type="bibr" rid="B42">Lundervold and Lundervold, 2019</xref>). CNNs can effectively process high-dimensional data such as images because they automatically learn features from large-scale data and generalize them to unknown data of the same type. Generally, CNNs consist of four parts: 1) the convolution layers extract the features of the input images; 2) the activation layers realize nonlinear mapping; 3) the pooling layers aggregate features in different regions to reduce the data dimension; and 4) the fully connected layer outputs the final classification results. The convolution layers usually contain several convolution kernels, also known as filters, which sequentially extract the features of the input image in a manner loosely analogous to the human visual system. Over the past years, various derivative networks, such as LeNet (<xref ref-type="bibr" rid="B34">Lecun et&#x20;al., 1998</xref>), AlexNet (<xref ref-type="bibr" rid="B32">Krizhevsky et&#x20;al., 2012</xref>), ZFNet (<xref ref-type="bibr" rid="B87">Zeiler and Fergus, 2014</xref>), VGG (<xref ref-type="bibr" rid="B67">Simonyan and Zisserman, 2014</xref>), GoogLeNet (<xref ref-type="bibr" rid="B72">Szegedy et&#x20;al., 2015</xref>), ResNet (<xref ref-type="bibr" rid="B22">He et&#x20;al., 2016</xref>) and SENet (<xref ref-type="bibr" rid="B25">Hu et&#x20;al., 2018</xref>), have been developed from the basic components of CNNs. The network accuracy is improved by manipulating the layer numbers and connection modes. 
CNNs exhibit two important characteristics. First, the neurons in neighboring layers are connected locally, in contrast to the fully connected neurons of an MLP. Second, the weights within a receptive field are shared, which reduces the number of parameters and accelerates the convergence of the network. Since the complexity of the model is reduced, the over-fitting problem can be alleviated. In principle, CNNs excel at image-related problems, such as super-resolution imaging. The network can automatically extract image features, including color, texture, shape and topology, which increases the robustness and operating efficiency of image processing. Recently, CNNs have been applied to photonic crystal nanocavity design (<xref ref-type="bibr" rid="B3">Asano and Noda, 2018</xref>). By optimizing the positions of air holes in a base nanocavity with CNNs, an extremely high Q-factor of 1.58 &#xd7; 10<sup>9</sup> was obtained.</p>
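To make the building blocks above concrete, the following NumPy sketch (illustrative only, not code from the cited works) implements a single shared-weight convolution, a ReLU activation and a max-pooling step, and applies them to a toy image containing one vertical edge. The kernel values and image are assumptions chosen so that the filter responds exactly at the edge.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution: one shared kernel slides over the image,
    so every output pixel reuses the same weights (local connectivity)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinear activation."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Aggregate each size x size region to its maximum (dimension reduction)."""
    H, W = x.shape
    H, W = H - H % size, W - W % size
    return x[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

# Toy 8x8 image: dark on the left, bright on the right (a vertical edge).
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# A rising-edge filter: responds where intensity steps up left-to-right.
edge = np.array([[-1.0, 1.0]])
feat = max_pool(relu(conv2d(img, edge)))   # feature map fires only at the edge
```

In a full CNN these operations are stacked and the kernel values are learned from data rather than hand-specified.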
</sec>
<sec id="s2-3">
<title>2.3 Recurrent Neural Networks</title>
<p>Just as human beings better understand the world by virtue of their memory, RNNs retain a memory of previously processed information. The output of an RNN is related not only to the current input but also to previous inputs. Thus, RNNs are widely used to model continuous sequential optical signals in the time domain. Since the networks memorize all information in the same way, they usually occupy a large amount of memory and suffer reduced computational efficiency. The long short-term memory network, a derivative of RNNs, can selectively memorize important information and forget unimportant information by controlling its gate states (<xref ref-type="bibr" rid="B24">Hochreiter and Schmidhuber, 1997</xref>). Moreover, it mitigates the vanishing- and exploding-gradient problems of long-sequence training.</p>
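The memory effect described above can be demonstrated with a minimal vanilla recurrent cell (a NumPy sketch with illustrative names and sizes, not from the cited works): the same weights are reused at every time step, and the hidden state carries information about all previous inputs. Two sequences that differ only in their first sample therefore end in different final hidden states.

```python
import numpy as np

rng = np.random.default_rng(1)

class TinyRNN:
    """Vanilla recurrent cell: the hidden state h carries memory of
    all previous inputs; the same weights are reused at every time step."""
    def __init__(self, n_in, n_hid):
        self.Wx = rng.normal(0, 0.5, (n_in, n_hid))   # input weights
        self.Wh = rng.normal(0, 0.5, (n_hid, n_hid))  # recurrent weights
        self.b = np.zeros(n_hid)

    def run(self, seq):
        h = np.zeros(self.b.shape)
        states = []
        for x in seq:                                 # one step per time sample
            h = np.tanh(x @ self.Wx + h @ self.Wh + self.b)
            states.append(h)
        return np.array(states)

rnn = TinyRNN(1, 8)
sig_a = np.ones((5, 1))          # constant signal
sig_b = np.ones((5, 1))
sig_b[0] = -1.0                  # same signal except for the first sample
out_a, out_b = rnn.run(sig_a), rnn.run(sig_b)
```

Although the last four inputs of `sig_a` and `sig_b` are identical, the final hidden states differ: the cell remembers the first sample. An LSTM adds input, forget and output gates on top of this recurrence to control what is kept.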
<p>Unsupervised learning is fed with unlabeled training data, meaning that no standard answers are available during the training process. Consequently, unsupervised systems are capable of discovering new patterns in the training datasets, some of which can even go beyond prior knowledge and scientific intuition. Moreover, unsupervised learning focuses on extracting important features from data rather than directly predicting the optical response, so it does not need massive data to train the network. In this way, it removes the burden of creating massive labeled&#x20;data.</p>
</sec>
<sec id="s2-4">
<title>2.4 Generative Adversarial Network</title>
<p>The generative adversarial network (GAN) was proposed by <xref ref-type="bibr" rid="B19">Goodfellow et&#x20;al. (2014)</xref> to solve unsupervised learning problems. It contains two independent networks, as shown in <xref ref-type="fig" rid="F2">Figure&#x20;2D</xref>, which compete with each other in a zero-sum game. The discriminator network distinguishes whether the input structure data are real or fake. The generator network generates fake structure data by selecting and combining elements in the latent space with superposed noise. In the training process, the discriminator receives both real and fake structure data and judges which category each input belongs to. Specifically, if the discriminator judges correctly, the generator is adjusted to make the fake structure data more realistic so as to deceive the discriminator; otherwise, the discriminator is adjusted to avoid making similar mistakes again. Continued training reaches a balanced state in which a high-quality generator and a discriminator with strong judgment ability are obtained. After training, the generator is capable of producing target photonic structures quickly, and the discriminator can accurately judge whether a new input structure matches the target optical response or&#x20;not.</p>
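The adversarial loop above can be sketched for a one-dimensional toy problem (an illustration written for this review, not the cited implementation): the "real structure data" are samples from a Gaussian, the generator is a learnable shift of latent noise, and the discriminator is a logistic classifier. The two are updated alternately, with analytic gradients, until the fakes become statistically indistinguishable from the real samples.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1 / (1 + np.exp(-t))

# Generator G(z) = z + m: shifts latent noise z ~ N(0,1) by a learnable offset m.
# Discriminator D(x) = sigmoid(w*x + c): logistic "real vs fake" classifier.
m, w, c = 0.0, 0.1, 0.0
lr = 0.01

for step in range(4000):
    real = rng.normal(3.0, 1.0, 64)      # "real structure data"
    z = rng.normal(0.0, 1.0, 64)
    fake = z + m                          # "fake structure data"

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator: gradient ascent on log D(fake) (non-saturating loss),
    # i.e., move the fakes to where the discriminator says "real".
    df = sigmoid(w * fake + c)
    m += lr * np.mean((1 - df) * w)
```

At equilibrium the generated distribution is centered near the real mean of 3, so the discriminator can no longer separate the two; in photonic applications the scalar generator is replaced by a network emitting structure patterns.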
<p>The typical characteristics of deep learning algorithms, including MLP, CNNs, RNNs and generative models, are summarized in <xref ref-type="table" rid="T1">Table&#x20;1</xref>. Note that MLP and CNNs have been widely used in photonic device design and optical data analysis. Further research on RNNs and generative models for photonics applications remains to be explored.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Comparison of deep learning algorithms.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Algorithms</th>
<th align="center">Unique features</th>
<th align="center">Advantages</th>
<th align="center">Disadvantages</th>
<th align="center">Optical applications</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Multilayer perceptron</td>
<td align="left">Fully connected neurons, simple structure</td>
<td align="left">High reliability, low latency</td>
<td align="left">Difficult to handle high dimensional data</td>
<td align="left">Nanoparticle simulation<sup>(<xref ref-type="bibr" rid="B53">Peurifoy et al., 2018</xref>)</sup>, self-adaptive invisibility cloak<sup>(<xref ref-type="bibr" rid="B57">Qian et&#x20;al., 2020</xref>)</sup>, 3D vectorial holography<sup>(<xref ref-type="bibr" rid="B61">Ren et&#x20;al., 2020</xref>)</sup>
</td>
</tr>
<tr>
<td align="left">Convolutional neural networks</td>
<td align="left">Local receptive fields, shared weights</td>
<td align="left">High dimensional data processing</td>
<td align="left">Ignore global and local correlations</td>
<td align="left">Spectra analysis<sup>(<xref ref-type="bibr" rid="B13">Fan et&#x20;al., 2019</xref>)</sup>, optical communications<sup>(<xref ref-type="bibr" rid="B12">Fan et&#x20;al., 2020</xref>)</sup>, data storage<sup>(<xref ref-type="bibr" rid="B79">Wiecha et al., 2019</xref>)</sup>, optical image processing<sup>(<xref ref-type="bibr" rid="B6">Buggenthin et&#x20;al., 2017</xref>)</sup>
</td>
</tr>
<tr>
<td align="left">Recurrent neural networks</td>
<td align="left">Intra-layer neurons connected, shared parameters at different cycles</td>
<td align="left">Memorable, sequential information processing</td>
<td align="left">Long-term dependencies, gradient disappearance</td>
<td align="left">Optical character recognition<sup>(<xref ref-type="bibr" rid="B68">Singh, 2013</xref>)</sup>, transient electromagnetic modeling<sup>(<xref ref-type="bibr" rid="B65">Sharma and Zhang, 2005</xref>)
</sup>
</td>
</tr>
<tr>
<td align="left">Generative model</td>
<td align="left">With two different networks, gradient updated from discriminator rather than training data</td>
<td align="left">Fast convergence speed, incomplete datasets processing</td>
<td align="left">Unsuitable for discrete data, error-prone</td>
<td align="left">Metallic metamolecules design<sup>(<xref ref-type="bibr" rid="B41">Liu et&#x20;al., 2020</xref>)</sup>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In the inverse design of photonic devices, various optimization algorithms have been used to efficiently search for targets in large design spaces, including classic machine learning approaches (e.g., regularization algorithms, ensemble algorithms and decision tree algorithms) and traditional optimization approaches (e.g., topology optimization, adjoint methods and genetic algorithms). By involving more data, deep learning can usually improve its accuracy efficiently, whereas additional data have little effect on conventional machine learning approaches. Moreover, transfer learning enables well-trained deep learning models to be applied to other scenarios, making them adaptable and easy to transfer. In contrast, a conventional machine learning model is typically tied to a single scenario and transfers poorly. Traditional optimization approaches search for the optimal solution iteratively, modifying the search strategy according to intermediate results. This strategy consumes huge computational resources and is difficult to apply to complex designs. Readers interested in these algorithms can refer to the recent review for more information (<xref ref-type="bibr" rid="B44">Ma L. et&#x20;al., 2021</xref>).</p>
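For contrast with the data-driven models above, the iterative search of a traditional optimization approach such as a genetic algorithm can be sketched as follows. This is a toy NumPy example written for illustration: the analytic `figure_of_merit` stands in for a full electromagnetic simulation, and all names and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def figure_of_merit(x):
    """Toy stand-in for a simulated optical response, peaked at x = 0.7
    in every design parameter (a real workflow would call a solver here)."""
    return np.exp(-np.sum((x - 0.7) ** 2, axis=-1))

def genetic_search(n_params=4, pop=40, gens=60, mut=0.05):
    """Each generation keeps the fittest designs and mutates them, i.e.,
    the search strategy adapts to intermediate results, at the cost of
    re-evaluating the figure of merit for every new candidate."""
    population = rng.uniform(0, 1, (pop, n_params))
    for _ in range(gens):
        fitness = figure_of_merit(population)
        elite = population[np.argsort(fitness)[-pop // 4:]]        # selection
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = children + rng.normal(0, mut, children.shape)   # mutation
        population = np.vstack([elite, children])
    return population[np.argmax(figure_of_merit(population))]

best = genetic_search()
```

Note the contrast with a trained network: every candidate design here costs one solver call, whereas a deep learning surrogate pays the simulation cost only once, when generating the training set.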
</sec>
</sec>
<sec id="s3">
<title>3 Photonic Applications of Deep Learning</title>
<p>In this section, we briefly introduce the deep learning-based applications from photonic structure design to data analysis.</p>
<sec id="s3-1">
<title>3.1 Deep Learning for Photonic Structure Design</title>
<p>In the past decades, photonics has developed rapidly and shows a strong capability in tailoring light-matter interactions. Recently, this field has been revolutionized by data-driven deep learning methods. After training on large sample sets, such methods can capture the intricate relationships between photonic structures and their optical responses, which circumvents the time-consuming iterative numerical simulations of photonic structure design. Moreover, unlike traditional optimization algorithms, which require a new iterative search for each design task, data collection and network training in deep learning are only one-time costs. Such a data-driven model can serve as a powerful tool for the on-demand design of photonic devices.</p>
<sec id="s3-1-1">
<title>3.1.1 Inverse Design of Optical Nanoparticles</title>
<p>Core-shell nanoparticles can exhibit intriguing phenomena, such as multifrequency superscattering (<xref ref-type="bibr" rid="B58">Qin et&#x20;al., 2021</xref>), directional scattering and Fano-like resonances, but their high degrees of freedom make design difficult. <xref ref-type="bibr" rid="B53">Peurifoy et&#x20;al. (2018)</xref> applied an MLP to predict the scattering cross-section of a nanoparticle with silicon dioxide/titanium dioxide multilayered structures, as shown in <xref ref-type="fig" rid="F3">Figure&#x20;3A</xref>. In this work, the MLP was trained on 50,000 scattering cross-section spectra obtained by the transfer matrix method. The authors achieved the dual functions of forward modeling and inverse design. Specifically, the MLP was used to approximate the scattering cross-section of the core-shell nanoparticle from the input layer parameters. Conversely, given target scattering spectra, the MLP would expeditiously output the corresponding structural parameters of the nanoparticle. The results show that the MLP is able to calculate spectra accurately even when the input structure goes beyond the training data. This suggests that the MLP is not simply fitting the data, but instead discovering underlying patterns and structures of the input and output data. Note that this model architecture cannot inversely design the materials themselves, so there are certain restrictions on the design freedom. <xref ref-type="bibr" rid="B70">So et&#x20;al. (2019)</xref> took a step forward and inversely designed the optical materials and structural thicknesses simultaneously by performing classification and regression at the same time, as shown in <xref ref-type="fig" rid="F3">Figure&#x20;3B</xref>. They used classification to determine which material was used for each layer, and regression to predict the thickness. The loss function was defined as a weighted average of the spectrum and design losses. 
The spectrum loss was the mean squared error between the target spectra and the response predicted by deep learning, and the design loss was a weighted average of the material and structural losses. As a result, the materials and thicknesses of the core-shell nanoparticle were designed simultaneously and accurately.</p>
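The training spectra in these works were generated with a transfer-matrix solver for spherical multilayers. As a simplified planar analogue (an assumption made here for illustration, not the solver actually used in the cited works), the standard characteristic-matrix method for the normal-incidence reflectance of a thin-film stack can be sketched in NumPy:

```python
import numpy as np

def reflectance(n_layers, d_layers, wavelengths, n_in=1.0, n_out=1.0):
    """Normal-incidence reflectance of a planar multilayer stack via the
    standard characteristic (transfer) matrix method for lossless media."""
    R = np.empty(len(wavelengths))
    for k, lam in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):       # incidence side first
            delta = 2 * np.pi * n * d / lam        # phase thickness of layer
            Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                           [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ Mj
        num = n_in * (M[0, 0] + M[0, 1] * n_out) - (M[1, 0] + M[1, 1] * n_out)
        den = n_in * (M[0, 0] + M[0, 1] * n_out) + (M[1, 0] + M[1, 1] * n_out)
        R[k] = abs(num / den) ** 2
    return R

# Quarter-wave TiO2/SiO2 pairs designed for 600 nm: the reflectance
# peaks at the design wavelength (indices are illustrative constants).
lams = np.linspace(400, 800, 81)
n = [2.4, 1.46] * 4                      # alternating high/low indices
d = [600 / (4 * ni) for ni in n]         # quarter-wave thicknesses (nm)
R = reflectance(n, d, lams)
```

Sweeping thicknesses with such a solver is how a labeled training set of (structure, spectrum) pairs can be assembled before any network is trained.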
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Photonic designs enabled by deep learning models. <bold>(A)</bold> Nanophotonic particle scattering simulation. Reproduced from <xref ref-type="bibr" rid="B53">Peurifoy et&#x20;al. (2018)</xref> with permission from American Association for the Advancement of Science. <bold>(B)</bold> Simultaneous design of material and structure of nanosphere particles. Reproduced from <xref ref-type="bibr" rid="B70">So et&#x20;al. (2019)</xref> with permission from American Chemical Society. <bold>(C)</bold> Inverse design of metallic metamolecules. Reproduced from <xref ref-type="bibr" rid="B41">Liu et&#x20;al. (2020)</xref> with permission from the WILEY-VCH Verlag GmbH &#x26; Co. KGaA, Weinheim. <bold>(D)</bold> Self-adaptive invisibility cloak. Reproduced from <xref ref-type="bibr" rid="B57">Qian et&#x20;al. (2020)</xref> with permission from Springer Nature. <bold>(E)</bold> Optical vectorial hologram design of a 3D-kangaroo projection. Reproduced from <xref ref-type="bibr" rid="B61">Ren et&#x20;al. (2020)</xref> with permission from American Association for the Advancement of Science.</p>
</caption>
<graphic xlink:href="fmats-08-791296-g003.tif"/>
</fig>
</sec>
<sec id="s3-1-2">
<title>3.1.2 Inverse Design of Metasurface</title>
<p>Over the past 2&#xa0;decades, the explorations of metasurfaces have led to the discovery of exotic light&#x2013;matter interactions, such as anomalous deflection (<xref ref-type="bibr" rid="B86">Yu and Capasso, 2014</xref>; <xref ref-type="bibr" rid="B77">Wang et&#x20;al., 2018</xref>), asymmetric polarization conversion (<xref ref-type="bibr" rid="B64">Schwanecke et&#x20;al., 2008</xref>; <xref ref-type="bibr" rid="B54">Pfeiffer et&#x20;al., 2014</xref>) and wave-front shaping (<xref ref-type="bibr" rid="B56">Pu et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B88">Zhang et&#x20;al., 2017</xref>; <xref ref-type="bibr" rid="B59">Raeker and Grbic, 2019</xref>).</p>
<p>From individual nanoparticles to metasurfaces composed of collective meta-atoms, the structural degrees of freedom and flexibility increase drastically. <xref ref-type="bibr" rid="B41">Liu et&#x20;al. (2020)</xref> proposed a hybrid framework, i.e., a compositional pattern-producing network (CPPN) combined with a cooperative coevolution (CC) algorithm, to design metamolecules with significantly increased training efficiency, as shown in <xref ref-type="fig" rid="F3">Figure&#x20;3C</xref>. The CPPN, as a generative network, composes high-quality nanostructure patterns, while CC divides the target metamolecule into independent meta-atoms. Metallic metamolecules for the arbitrary manipulation of the polarization and wavefront of light were demonstrated in this hybrid framework. This work provides a promising way to automatically construct large-scale metasurfaces with high efficiency. Note that the proposed framework assumes weakly coupled meta-atoms; strong coupling and nonlinear optical effects are expected to be incorporated in future developments. The three-dimensional (3D) vector nature of the optical field is crucial to understanding light-matter interactions, and it plays a significant role in imaging, holographic optical trapping and high-capacity data storage. Hence, using deep learning to manipulate complex 3D vector optical fields in photonic structures, including spin and orbital angular momentum, topology and anisotropic vector fields, is ready to be explored. For instance, <xref ref-type="bibr" rid="B61">Ren et&#x20;al. (2020)</xref> designed an optical vectorial hologram of a 3D-kangaroo projection using an MLP, as shown in <xref ref-type="fig" rid="F3">Figure&#x20;3E</xref>. The phase hologram and a 2D vector-field distribution served as the state vector and label vector, respectively, and were used to train the network model to reconstruct a stereo optical image. 
This work achieves the lensless reconstruction of a 3D-image with an ultra-wide viewing angle of 94&#xb0;and a high diffraction efficiency of 78%, which shows great potentials in multiplexed displays and encryption.</p>
<p>Following the pioneering works on static manipulation of optical fields, there is increasing interest in dynamic manipulation, such as the design of invisibility cloaks. The invisibility cloak is an intriguing device with applications in various fields; conventional cloaks, however, cannot adapt to an ever-changing environment. <xref ref-type="bibr" rid="B57">Qian et&#x20;al. (2020)</xref> used an MLP to design a self-adaptive cloak with millisecond response time to the dynamic incident wave and surrounding environment, as shown in <xref ref-type="fig" rid="F3">Figure&#x20;3D</xref>. To this end, the optical response of each element inside the metasurface was independently tuned by feeding a different bias voltage across a loaded varactor diode. With deep learning, the integrated system could exploit the intricate relationship between the incident waves, reflection spectra and bias voltages of each individual meta-atom. On this basis, an intelligent cloak operating over a bandwidth of 6.7&#x2013;9.2&#xa0;GHz was realized. The demonstrated concept can potentially be extended to the visible spectrum using gate-tunable conducting oxides (e.g., indium tin oxide) (<xref ref-type="bibr" rid="B26">Huang et&#x20;al., 2016</xref>) or phase-change materials (e.g., vanadium dioxide) (<xref ref-type="bibr" rid="B10">Cormier et&#x20;al., 2017</xref>).</p>
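To make the cloak's control loop concrete, the sketch below replaces the trained MLP of Qian et al. (2020) with a hypothetical, analytically invertible varactor response: `reflection_phase` models how a meta-atom's reflection phase varies with bias voltage, and `required_voltage` plays the role that the learned inverse mapping plays in the real system. All functional forms and numbers here are illustrative assumptions, not a model of the actual device.

```python
import numpy as np

# hypothetical varactor response: reflection phase (rad) vs bias voltage (V)
def reflection_phase(v):
    return np.pi * np.tanh(0.5 * (v - 5.0))   # smooth, saturating, invertible

# the role played by the trained network: map required phase -> bias voltage
def required_voltage(target_phase):
    return 5.0 + 2.0 * np.arctanh(np.clip(target_phase / np.pi, -0.99, 0.99))

targets = np.linspace(-2.5, 2.5, 8)   # phases needed to mimic the background
volts = required_voltage(targets)     # one bias voltage per meta-atom
```

In the real cloak this inversion must hold across frequency and incidence angle, which is why a trained network replaces a closed-form inverse.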
<p>Deep learning exhibits huge potential in photonic structure design, material optimization, and even the optimization of entire optical systems. Besides the aforementioned works, it has been applied to the design of various intricate devices, such as multi-mode converters (<xref ref-type="bibr" rid="B40">Liu et&#x20;al., 2019</xref>; <xref ref-type="bibr" rid="B90">Zheng et&#x20;al., 2021</xref>), metagratings (<xref ref-type="bibr" rid="B27">Inampudi and Mosallaei, 2018</xref>; <xref ref-type="bibr" rid="B29">Jiang et&#x20;al., 2019</xref>), chiral metamaterials (<xref ref-type="bibr" rid="B45">Ma et&#x20;al., 2018</xref>) and photonic crystals (<xref ref-type="bibr" rid="B21">Hao et&#x20;al., 2019</xref>).</p>
</sec>
</sec>
<sec id="s3-2">
<title>3.2 Deep Learning for Optical Data Analysis</title>
<p>Optical techniques have been widely implemented in various fields, and large volumes of optical data are generated when optical spectroscopy and imaging are applied to medical diagnosis, information storage and optical communication. The conventional analysis of optical data is often based on prior experience and physical intuition, yet it is time-consuming and error-prone when processing huge amounts of complex optical data, such as optical spectra and images. To tackle this challenge, various deep neural networks have been exploited. In the following, some important works applying deep learning to optical data analysis are introduced.</p>
<sec id="s3-2-1">
<title>3.2.1 Complex Spectra Analysis</title>
<p>Optical spectroscopy is the study of the interaction between matter and light as a function of wavelength or frequency. From spectral analysis, the chemical compositions and relative contents of target analytes can be deduced. Deep learning provides an alternative way to better extract the information encoded in massive and complex spectra. For example, <xref ref-type="bibr" rid="B13">Fan et&#x20;al. (2019)</xref> implemented a CNN to analyze Raman spectra and identify the components of mixtures, as shown in <xref ref-type="fig" rid="F4">Figure&#x20;4A</xref>. The training datasets contained the spectra of 94 ternary mixtures of methanol, acetonitrile and distilled water. The identification accuracy of the CNN was up to 99.9% and the detected volume percentage of methanol was as low as 4%, outperforming conventional models such as k-nearest neighbors. The proposed component identification algorithm is suitable for sensing complex mixtures and holds potential for rapid disease diagnosis.</p>
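A minimal sketch of the kind of 1D convolutional pipeline used for spectral component identification. The weights are random (untrained) and the spectrum is a stand-in; the filter width, layer sizes and three-class output are illustrative assumptions, not the architecture of Fan et al. (2019).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels, bias):
    # valid cross-correlation of one spectrum with a bank of learned filters
    out = np.stack([np.convolve(x, k[::-1], mode="valid") + b
                    for k, b in zip(kernels, bias)])
    return np.maximum(out, 0.0)   # ReLU activation

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

spectrum = rng.random(128)               # stand-in for a measured Raman spectrum
kernels = rng.standard_normal((8, 5))    # 8 filters of width 5 (assumed sizes)
bias = np.zeros(8)
w_out = rng.standard_normal((3, 8))      # dense layer to 3 component scores

features = conv1d_relu(spectrum, kernels, bias).mean(axis=1)  # global avg pool
probs = softmax(w_out @ features)        # per-component probabilities
```

In the trained model, the learned filters respond to characteristic Raman peaks, and the output layer scores each candidate component.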
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Deep learning for optical data analysis. <bold>(A)</bold> Working principle of CNN used in spectral analysis. Reproduced from <xref ref-type="bibr" rid="B13">Fan et&#x20;al. (2019)</xref> with permission from the Royal Society of Chemistry. <bold>(B)</bold> Schematic of 4-bit nanostructure geometry. <bold>(C)</bold> Illustration of CNN applied for information storage. Reproduced from <xref ref-type="bibr" rid="B79">Wiecha et&#x20;al. (2019)</xref> with permission from Springer Nature. <bold>(D)</bold> Schematic of CNN used for classification of differentiated cells. Reproduced from <xref ref-type="bibr" rid="B6">Buggenthin et&#x20;al. (2017)</xref> with permission from Springer Nature. <bold>(E)</bold> DNN-based DBP architecture. FOE: frequency-offset estimation, CPE: carrier-phase estimation. Reproduced from <xref ref-type="bibr" rid="B12">Fan et&#x20;al. (2020)</xref> with permission from Springer Nature. <bold>(F)</bold> The proposed CARE for image restoration. Reproduced from <xref ref-type="bibr" rid="B78">Weigert et&#x20;al. (2018)</xref> with permission from Springer Nature.</p>
</caption>
<graphic xlink:href="fmats-08-791296-g004.tif"/>
</fig>
<p>Optical memory provides an intriguing solution for &#x201c;big data&#x201d; owing to its high information capacity and longevity. However, the diffraction limit of light inevitably restricts the bit density of optical information storage. <xref ref-type="bibr" rid="B79">Wiecha et&#x20;al. (2019)</xref> encoded multiple bits of information in subwavelength dielectric nanostructures and read them out using a CNN and an MLP, as illustrated in <xref ref-type="fig" rid="F4">Figure&#x20;4B</xref>. The scattering spectra were identified to extract the bit sequence: the spectra propagated forward through the network, and the outputs of highly activated neurons indicated the encoded bit sequence (<xref ref-type="fig" rid="F4">Figure&#x20;4C</xref>). In this way, they efficiently improved the bit density up to 9 bits with quasi-error-free readout accuracy, corresponding to a 40% higher information density than that of Blu-ray discs. Furthermore, they simplified the readout process by probing only a few wavelengths of the nanostructure scattering, i.e., the scattered RGB values of dark-field microscopy images. This study provides a promising solution for high-density optical information storage based on planar nanostructures.</p>
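The readout step can be sketched as multi-label classification, where each output neuron decides one bit independently through its own sigmoid. The small MLP below is an untrained stand-in with assumed sizes, not the network of Wiecha et al. (2019).

```python
import numpy as np

rng = np.random.default_rng(1)

def readout_bits(spectrum, w1, w2):
    """Decode a bit sequence from a scattering spectrum (illustrative)."""
    hidden = np.tanh(w1 @ spectrum)                 # hidden layer of a small MLP
    probs = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # one independent sigmoid per bit
    return (probs > 0.5).astype(int)                # threshold to a bit word

# stand-ins: a 64-point scattering spectrum and untrained weights
spectrum = rng.random(64)
w1 = 0.1 * rng.standard_normal((16, 64))
w2 = rng.standard_normal((9, 16))                   # 9 output neurons -> 9-bit word
bits = readout_bits(spectrum, w1, w2)
```

Training drives each output neuron to fire only when its bit is present in the nanostructure's geometry, which is what makes quasi-error-free readout possible.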
</sec>
<sec id="s3-2-2">
<title>3.2.2 Nonlinear Signal Processing</title>
<p>Long-haul optical communications face fundamental bottlenecks, such as fiber Kerr nonlinearity and chromatic dispersion. Deep learning, as a powerful tool, has been applied to fiber nonlinearity compensation in optical communications. <xref ref-type="bibr" rid="B12">Fan et&#x20;al. (2020)</xref> developed a deep learning-based digital back-propagation (DBP) architecture for nonlinear optical signal processing, as shown in <xref ref-type="fig" rid="F4">Figure&#x20;4E</xref>. For a single-channel 28-GBaud 16-quadrature-amplitude-modulation system, the method demonstrated a 0.9-dB quality-factor gain. The architecture was further extended to polarization-division-multiplexed (PDM) and wavelength-division-multiplexed (WDM) systems, where the quality-factor gains of the modified DBP were 0.6 and 0.25&#xa0;dB for the single-channel PDM and the WDM system, respectively. This work shows that deep learning provides an effective tool for the theoretical understanding of nonlinear fiber transmission.</p>
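Conventional DBP, which the DNN-based architecture parameterizes layer by layer, can be illustrated with a toy split-step model: propagating the received signal through the same model with negated dispersion and nonlinearity parameters inverts the (noiseless) channel. Units and parameter values below are illustrative, not a calibrated fiber model.

```python
import numpy as np

def split_step(signal, dt, beta2, gamma, dz, n_steps):
    """Symmetric split-step Fourier propagation of one polarization (toy units)."""
    w = 2 * np.pi * np.fft.fftfreq(signal.size, d=dt)
    half_lin = np.exp(0.5j * beta2 * w**2 * (dz / 2))   # half-step dispersion
    for _ in range(n_steps):
        signal = np.fft.ifft(np.fft.fft(signal) * half_lin)
        signal = signal * np.exp(1j * gamma * np.abs(signal)**2 * dz)  # Kerr phase
        signal = np.fft.ifft(np.fft.fft(signal) * half_lin)
    return signal

rng = np.random.default_rng(2)
tx = rng.choice([-1.0, 1.0], 256) + 1j * rng.choice([-1.0, 1.0], 256)  # QPSK-like
rx = split_step(tx, dt=1.0, beta2=0.05, gamma=0.01, dz=1.0, n_steps=10)
# DBP: rerun the same model with negated parameters to invert the channel
recovered = split_step(rx, dt=1.0, beta2=-0.05, gamma=-0.01, dz=1.0, n_steps=10)
```

The learned-DBP approach keeps this structure but makes the per-step filters trainable, which is how it trades off accuracy against the number of steps.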
<p>In addition, deep learning has promoted the development of intelligent systems in fiber-optic communication, such as eye-diagram analyzers. <xref ref-type="bibr" rid="B76">Wang et&#x20;al. (2017)</xref> proposed an intelligent eye-diagram analyzer based on CNNs to achieve modulation-format recognition and optical signal-to-noise ratio estimation in optical communications. Four commonly used optical signals were obtained by simulation and then detected by photodetectors. They collected 6,400 eye-diagram images from the oscilloscope as training sets, each image carrying a 20-bit label vector. During training, the CNNs gradually extracted the effective features of the input images, and the back-propagation algorithm was exploited to optimize the kernel parameters. Consequently, the estimation accuracy nearly reached 100%.</p>
</sec>
<sec id="s3-2-3">
<title>3.2.3 Optical Images Processing</title>
<p>Optical imaging technologies, such as fluorescence microscopy and super-resolution imaging, are powerful tools in various areas. For example, image classification has been widely used for medical image recognition. <xref ref-type="bibr" rid="B6">Buggenthin et&#x20;al. (2017)</xref> established a classifier combining CNNs and RNNs to directly identify differentiated cells. With massive bright-field image inputs, the convolutional layers extracted local features, and a concatenation layer combined the highest-level spatial features with the cell displacement during differentiation. The extracted features were fed into the RNNs to exploit the temporal information of single-cell tracks for cell lineage prediction, as shown in <xref ref-type="fig" rid="F4">Figure&#x20;4D</xref>. They achieved label-free identification of cells with differentially expressed lineage-specifying genes, and the lineage choice could be detected up to three generations in advance. The model allows cell differentiation processes to be analyzed with high robustness and rapid prediction.</p>
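The CNN-plus-RNN pipeline can be sketched as a recurrent pass over per-frame feature vectors of one cell track. The vanilla RNN, feature sizes and two-class output below are illustrative stand-ins for the trained model, with random weights and data.

```python
import numpy as np

rng = np.random.default_rng(3)

def rnn_classify(track_features, w_h, w_x, w_out):
    """Recurrent pass over one cell track's per-frame features (illustrative)."""
    h = np.zeros(w_h.shape[0])
    for x in track_features:            # one CNN feature vector per movie frame
        h = np.tanh(w_h @ h + w_x @ x)  # update the hidden state
    logits = w_out @ h                  # lineage scores from final hidden state
    e = np.exp(logits - logits.max())
    return e / e.sum()

track = rng.random((12, 8))                  # 12 frames, 8 features each (stand-ins)
w_h = 0.1 * rng.standard_normal((16, 16))
w_x = 0.1 * rng.standard_normal((16, 8))
w_out = rng.standard_normal((2, 16))         # two candidate lineages
probs = rnn_classify(track, w_h, w_x, w_out)
```

Because the prediction uses the whole track, the classifier can pick up slow morphological and motion cues that no single frame reveals.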
<p>In fluorescence microscopy, the observable phenomena are limited by the chemistry of fluorophores and the maximum photon exposure that the sample can withstand. The cross-discipline of deep learning and bio-imaging provides an opportunity to overcome this obstacle. For instance, <xref ref-type="bibr" rid="B78">Weigert et&#x20;al. (2018)</xref> proposed a content-aware image restoration (CARE) method to restore microscopy images with enhanced performance. In <xref ref-type="fig" rid="F4">Figure&#x20;4F</xref>, the CNN architecture was trained on well-registered pairs of images: a low signal-to-noise ratio (SNR) image as input and a high-SNR one as output. The CARE networks could maintain high-SNR microscopy images even when the light dosage was decreased 60-fold. Besides, isotropic resolution could be realized with tenfold fewer axial slices. Impressively, image reconstruction by CARE was 20 times faster than state-of-the-art reconstruction methods. The proposed CARE networks can extend the range of biological phenomena observable by microscopy and can be automatically adapted to various image contents.</p>
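The training principle of CARE, fitting a restorer on registered low-SNR/high-SNR image pairs, can be illustrated with a deliberately tiny stand-in: an affine map fit by least squares takes the place of the CNN, and synthetic patch pairs take the place of microscopy data.

```python
import numpy as np

rng = np.random.default_rng(4)

# paired training data: low-SNR inputs and registered high-SNR targets,
# standing in for the image pairs used to train CARE
clean = rng.random((200, 9))                        # flattened 3x3 ground-truth patches
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# a linear "restorer" fit by least squares stands in for the CNN
X = np.hstack([noisy, np.ones((200, 1))])           # affine map: weights + bias
w, *_ = np.linalg.lstsq(X, clean, rcond=None)

test_clean = rng.random((50, 9))
test_noisy = test_clean + 0.3 * rng.standard_normal(test_clean.shape)
restored = np.hstack([test_noisy, np.ones((50, 1))]) @ w

mse_before = float(np.mean((test_noisy - test_clean) ** 2))
mse_after = float(np.mean((restored - test_clean) ** 2))
```

The CNN plays the same role with far more capacity: it learns image-content priors from the pairs, which is what allows restoration at a fraction of the light dosage.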
</sec>
</sec>
</sec>
<sec id="s4">
<title>4 Optical Neural Networks</title>
<p>Integrated circuit chips, such as graphics processing units, central processing units and application-specific integrated circuits, are the mainstream hardware carriers for deep learning (<xref ref-type="bibr" rid="B49">Misra and Saha, 2010</xref>). However, conventional electronic computing systems based on the von Neumann architecture are insufficient for training and testing neural networks (<xref ref-type="bibr" rid="B51">Neumann, 2012</xref>), because they separate the data space from the program space, generating tidal data loads between the computing unit and the memory. Photons offer the unique ability to realize massive interconnections and simultaneous parallel calculations at the speed of light (<xref ref-type="bibr" rid="B83">Xu et&#x20;al., 2021</xref>). Thus, optical neural networks (ONNs) constructed from photonic devices have opened a new road to orders-of-magnitude improvements in both computation speed and energy consumption over existing solutions (<xref ref-type="bibr" rid="B7">Cardenas et&#x20;al., 2009</xref>; <xref ref-type="bibr" rid="B84">Yang et&#x20;al., 2013</xref>). ONNs have shown potential for addressing the ever-growing demand for high-speed data analysis in complex applications, such as medical diagnosis, autonomous driving and high-performance computing, as shown in <xref ref-type="fig" rid="F5">Figure&#x20;5</xref>. The platforms for realizing ONNs mainly include photonic integrated circuits and optical diffractive layers, as discussed below.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Development of photonic chip computing power and representative applications.</p>
</caption>
<graphic xlink:href="fmats-08-791296-g005.tif"/>
</fig>
<p>Recently, <xref ref-type="bibr" rid="B66">Shen et&#x20;al. (2017)</xref> experimentally demonstrated an ONN using a cascaded array of 56 programmable Mach-Zehnder interferometers on an integrated chip. They estimated that the proposed ONN could theoretically perform 10<sup>11</sup> N-dimensional matrix-vector multiplications per second, two orders of magnitude faster than state-of-the-art electronic devices. To test the performance, they verified its utility in vowel recognition with a measured accuracy of 76.7%, and claimed that the system could reach an accuracy of 90% with calibrations to reduce thermal cross-talk, comparable to the 91.7% accuracy of a conventional 64-bit computer. Note that the optical nonlinearity unit (a saturable absorber) was only modeled on a computer, and the power dissipation of data movement remains significant in current ONNs. There is still a long way to go in exploring optical interconnects and optical computing units to realize the supremacy of ONNs.</p>
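The building block of such a photonic mesh is the 2x2 Mach-Zehnder interferometer, whose transfer matrix is unitary, so optical power is conserved through each programmable node; meshes of these nodes compose larger unitary weight matrices. The convention below (two 50:50 couplers around an internal phase shifter, plus an external phase shifter) is one common choice among several in the literature.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 Mach-Zehnder interferometer transfer matrix (one common convention)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
    internal = np.diag([np.exp(1j * theta), 1.0])    # internal phase shifter
    external = np.diag([np.exp(1j * phi), 1.0])      # external phase shifter
    return external @ bs @ internal @ bs

u = mzi(theta=0.7, phi=0.3)
x = np.array([0.6, 0.8j])     # complex field amplitudes on the two input waveguides
y = u @ x                     # the "weight multiply" happens as light interferes
```

Setting the two phases programs the 2x2 weight block; a triangular or rectangular mesh of such blocks can realize an arbitrary unitary weight matrix.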
<p>In addition to photonic integrated circuits, physical diffractive layers also provide a method for implementing neural-network algorithms. Optical diffraction through planar structures is mathematically a convolution of the input modulated field with a propagation function, so diffractive layers can be intuitively used to build ONNs. <xref ref-type="bibr" rid="B38">Lin et&#x20;al. (2018)</xref> pioneered the study of all-optical diffractive deep neural network (D<sup>2</sup>NN) architectures. The learning framework was based on multiple layers of 3D-printed diffractive surfaces designed on a computer. They demonstrated that the trained D<sup>2</sup>NN could achieve automated classification of handwritten digits (accuracy of 93.39%) and of more complex image datasets (Fashion-MNIST, accuracy of 86.6%) with 10 diffractive layers and 0.4 million neurons. The proposed D<sup>2</sup>NN operates at the speed of light and can be easily extended to billions of neurons and connections (<xref ref-type="bibr" rid="B38">Lin et&#x20;al., 2018</xref>).</p>
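The forward pass of a D<sup>2</sup>NN alternates free-space diffraction with a learned phase mask at each layer. The sketch below implements this with the angular-spectrum propagation method, using arbitrary units and random (untrained) masks purely to illustrate the data flow.

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Free-space propagation by distance z via the angular-spectrum method."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    f2 = f[:, None]**2 + f[None, :]**2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - f2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def d2nn_forward(field, phase_masks, wavelength, dx, z):
    """Each layer applies its trained phase mask; diffraction links the layers."""
    for mask in phase_masks:
        field = propagate(field, wavelength, dx, z) * np.exp(1j * mask)
    return np.abs(propagate(field, wavelength, dx, z))**2   # detector intensity

rng = np.random.default_rng(5)
field = np.ones((64, 64), dtype=complex)                    # plane-wave input
masks = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(3)]  # untrained masks
intensity = d2nn_forward(field, masks, wavelength=0.75, dx=0.4, z=40.0)
```

Training adjusts the phase masks so that the detector intensity concentrates on the region assigned to the correct class; inference is then performed passively by the printed layers.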
</sec>
<sec id="s5">
<title>5 Outlook and Conclusion</title>
<p>Deep learning usually requires the support of large amounts of data. However, it is often impractical to collect massive databases by either physical simulation or experimental measurement. There are mainly two approaches to solving this problem. First, transfer learning allows the knowledge of a neural network trained on one physical process to be migrated to similar cases (<xref ref-type="bibr" rid="B74">Torrey and Shavlik, 2010</xref>). Specifically, a neural network pre-trained on high-quality datasets shows strong generalization ability and can solve new problems with small datasets, so the data-collection effort can be substantially reduced. Second, the burden of massive data collection can also be relieved by combining deep learning models with basic physical rules. For example, deep learning can be used as an intermediate step to effectively solve Maxwell&#x2019;s equations (<xref ref-type="bibr" rid="B69">So et&#x20;al., 2020</xref>), rather than to directly find a mapping between optical structures and their properties.</p>
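Transfer learning in its simplest form can be sketched as reusing a frozen, pre-trained feature extractor and fitting only a small task-specific head on the new dataset. All weights and data below are random stand-ins, chosen only to show which parts are reused and which are refit.

```python
import numpy as np

rng = np.random.default_rng(6)

# "pre-trained" feature extractor: weights assumed to have been learned on a
# large, related dataset and then frozen
w_frozen = 0.3 * rng.standard_normal((32, 10))

def features(x):
    return np.tanh(x @ w_frozen.T)   # frozen layers are reused, not retrained

# small task-specific dataset: only a new linear head is fit on top
x_small = rng.random((40, 10))
y_small = rng.integers(0, 2, 40).astype(float)

f = np.hstack([features(x_small), np.ones((40, 1))])      # features + bias term
head, *_ = np.linalg.lstsq(f, y_small, rcond=None)        # cheap fit on tiny data
pred = (f @ head > 0.5).astype(int)                       # binary predictions
```

Because only the head is fit, the number of free parameters scales with the small dataset rather than with the full network, which is why far less data is needed.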
<p>In the past few years, intelligent photonics has made great progress, benefiting from interdisciplinary collaborations between researchers in computer science and physical optics. To relieve researchers from tedious and complex algorithm programming, a user-friendly system is in high demand. Such a system should contain two basic parts: open-source resources and a user-friendly interface, as shown in <xref ref-type="fig" rid="F6">Figure&#x20;6</xref>. Inspired by the computer-science community, researchers are encouraged to share their datasets and neural networks to establish a comprehensive optical open-source community. The abundant open-source networks would further enable transfer learning for a variety of problems, the basic idea being to migrate data characteristics from related domains to improve learning on the target task. Thereafter, when a deep learning network is needed for a specific photonic problem, the relevant database and well-trained neural networks can be called directly from the open-source resources, avoiding ab initio data collection and model building.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>A conceived user-friendly software system for photonic structure design.</p>
</caption>
<graphic xlink:href="fmats-08-791296-g006.tif"/>
</fig>
<p>In the context of photonic structures, researchers are interested not only in specific designs and their performance, but also in the general mechanisms or principles that lead to the functionalities. Neural networks are considered black-box models, which fit the training sets to directly provide the expected results, and there is a relentless effort to study their interpretability. For instance, <xref ref-type="bibr" rid="B91">Zhou et&#x20;al. (2016)</xref> showed that, by using global average pooling, CNNs can retain remarkable localization ability, which exposes the implicit attention of CNNs on image-level labels. Such localization ability could plausibly be transferred to the physical interpretability of photonic device designs.</p>
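The class activation mapping idea of Zhou et al. (2016) follows from the fact that global average pooling (GAP) and the class-specific weighted sum commute: projecting the dense weights back onto the pre-pooling feature maps yields a localization map whose spatial average reproduces the class score. A minimal numerical check with stand-in activations:

```python
import numpy as np

rng = np.random.default_rng(7)

# stand-ins: 8 feature maps (7x7) from the last conv layer, and the dense
# weights that connect their global-average-pooled values to one class score
feature_maps = rng.random((8, 7, 7))
w_class = rng.standard_normal(8)

score = w_class @ feature_maps.mean(axis=(1, 2))     # class score after GAP
cam = np.tensordot(w_class, feature_maps, axes=1)    # 7x7 class activation map

# GAP and the weighted sum commute, so averaging the CAM recovers the score
```

High-valued regions of `cam` mark where the network's evidence for the class is located, which is the kind of spatial attribution that could aid the physical interpretation of learned photonic designs.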
<p>In this review, we have surveyed the recent development of deep learning in the field of photonics, including photonic structure design and optical data analysis. Optical neural networks are also emerging to reform the conventional electronic-circuit architecture of deep learning, offering high computational power and low energy consumption. We have witnessed fruitful interactions between deep learning and photonics, and look forward to more exciting works in this interdisciplinary field.</p>
</sec>
</body>
<back>
<sec id="s6">
<title>Author Contributions</title>
<p>Conceptualization, D-QY and J-hC; methodology, BD and BW; original draft preparation, BD, BW and J-hC; writing-review and editing, D-QY, J-hC and HC. All authors have read and agreed to the submitted version of the manuscript.</p>
</sec>
<sec id="s7">
<title>Funding</title>
<p>The authors would like to thank the support from National Natural Science Foundation of China (11974058, 62005231); Beijing Nova Program (Z201100006820125) from Beijing Municipal Science and Technology Commission; Beijing Natural Science Foundation (Z210004); State Key Laboratory of Information Photonics and Optical Communications (IPOC2021ZT01), BUPT, China; Fundamental Research Funds for the Central Universities (20720200074, 20720210045); Guangdong Basic and Applied Basic Research Foundation (2021A1515012199).</p>
</sec>
<sec sec-type="COI-statement" id="s8">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abbeel</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Coates</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ng</surname>
<given-names>A. Y.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Autonomous Helicopter Aerobatics through Apprenticeship Learning</article-title>. <source>Int. J.&#x20;Robotics Res.</source> <volume>29</volume>, <fpage>1608</fpage>&#x2013;<lpage>1639</lpage>. <pub-id pub-id-type="doi">10.1177/0278364910371999</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Anjit</surname>
<given-names>T. A.</given-names>
</name>
<name>
<surname>Benny</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Cherian</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Mythili</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Non-iterative Microwave Imaging Solutions for Inverse Problems Using Deep Learning</article-title>. <source>Pier M</source> <volume>102</volume>, <fpage>53</fpage>&#x2013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.2528/pierm21021304</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Asano</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Noda</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Optimization of Photonic crystal Nanocavities Based on Deep Learning</article-title>. <source>Opt. Express</source> <volume>26</volume>, <fpage>32704</fpage>&#x2013;<lpage>32717</lpage>. <pub-id pub-id-type="doi">10.1364/OE.26.032704</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barbastathis</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Ozcan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Situ</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>On the Use of Deep Learning for Computational Imaging</article-title>. <source>Optica</source> <volume>6</volume>, <fpage>921</fpage>&#x2013;<lpage>943</lpage>. <pub-id pub-id-type="doi">10.1364/OPTICA.6.000921</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bigio</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Sergio</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2016</year>). <source>Quantitative Biomedical Optics: Theory, Methods, and Applications</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buggenthin</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Buettner</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Hoppe</surname>
<given-names>P. S.</given-names>
</name>
<name>
<surname>Endele</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kroiss</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Strasser</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <article-title>Prospective Identification of Hematopoietic Lineage Choice by Deep Learning</article-title>. <source>Nat. Methods</source> <volume>14</volume>, <fpage>403</fpage>&#x2013;<lpage>406</lpage>. <pub-id pub-id-type="doi">10.1038/nmeth.4182</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cardenas</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Poitras</surname>
<given-names>C. B.</given-names>
</name>
<name>
<surname>Robinson</surname>
<given-names>J.&#x20;T.</given-names>
</name>
<name>
<surname>Preston</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Lipson</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Low Loss Etchless Silicon Photonic Waveguides</article-title>. <source>Opt. Express</source> <volume>17</volume>, <fpage>4752</fpage>&#x2013;<lpage>4757</lpage>. <pub-id pub-id-type="doi">10.1364/OE.17.004752</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Siegel</surname>
<given-names>E. L.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Will Machine Learning End the Viability of Radiology as a Thriving Medical Specialty?</article-title> <source>Bjr</source> <volume>92</volume>, <fpage>20180416</fpage>. <pub-id pub-id-type="doi">10.1259/bjr.20180416</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rocca</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A Review of Deep Learning Approaches for Inverse Scattering Problems (Invited Review)</article-title>. <source>Pier</source> <volume>167</volume>, <fpage>67</fpage>&#x2013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.2528/PIER20030705</pub-id> </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cormier</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Son</surname>
<given-names>T. V.</given-names>
</name>
<name>
<surname>Thibodeau</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Doucet</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Truong</surname>
<given-names>V.-V.</given-names>
</name>
<name>
<surname>Hach&#xe9;</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Vanadium Dioxide as a Material to Control Light Polarization in the Visible and Near Infrared</article-title>. <source>Opt. Commun.</source> <volume>382</volume>, <fpage>80</fpage>&#x2013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1016/j.optcom.2016.07.070</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dong</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.-K.</given-names>
</name>
<name>
<surname>Duan</surname>
<given-names>G.-H.</given-names>
</name>
<name>
<surname>Neilson</surname>
<given-names>D. T.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Silicon Photonic Devices and Integrated Circuits</article-title>. <source>Nanophotonics</source> <volume>3</volume>, <fpage>215</fpage>&#x2013;<lpage>228</lpage>. <pub-id pub-id-type="doi">10.1515/nanoph-2013-0023</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Gui</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Lau</surname>
<given-names>A. P. T.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Advancing Theoretical Understanding and Practical Performance of Signal Processing for Nonlinear Optical Communications through Machine Learning</article-title>. <source>Nat. Commun.</source> <volume>11</volume>, <fpage>3694</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-020-17516-7</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fan</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Ming</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Zeng</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Deep Learning-Based Component Identification for the Raman Spectra of Mixtures</article-title>. <source>Analyst</source> <volume>144</volume>, <fpage>1789</fpage>&#x2013;<lpage>1798</lpage>. <pub-id pub-id-type="doi">10.1039/C8AN02212G</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Swain</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Unni</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Decoding Optical Data with Machine Learning</article-title>. <source>Laser Photon. Rev.</source> <volume>15</volume>, <fpage>2000422</fpage>. <pub-id pub-id-type="doi">10.1002/lpor.202000422</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Mei</surname>
<given-names>T.</given-names>
</name>
</person-group> &#x201c;<article-title>Look Closer to See Better: Recurrent Attention Convolutional Neural Network for fine-grained Image Recognition</article-title>,&#x201d; in <conf-name>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</conf-name>, <conf-loc>Honolulu, HI, USA</conf-loc>, <conf-date>July 2017</conf-date>, <fpage>4438</fpage>&#x2013;<lpage>4446</lpage>. </citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ghosh</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Stuke</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Todorovi&#x107;</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>J&#xf8;rgensen</surname>
<given-names>P. B.</given-names>
</name>
<name>
<surname>Schmidt</surname>
<given-names>M. N.</given-names>
</name>
<name>
<surname>Vehtari</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>Deep Learning Spectroscopy: Neural Networks for Molecular Excitation Spectra</article-title>. <source>Adv. Sci.</source> <volume>6</volume>, <fpage>1801367</fpage>. <pub-id pub-id-type="doi">10.1002/advs.201801367</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goi</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Cumming</surname>
<given-names>B. P.</given-names>
</name>
<name>
<surname>Schoenhardt</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Luan</surname>
<given-names>H.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). <article-title>Nanoprinted High-Neuron-Density Optical Linear Perceptrons Performing Near-Infrared Inference on a CMOS Chip</article-title>. <source>Light Sci. Appl.</source> <volume>10</volume>, <fpage>1</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1038/s41377-021-00483-z</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Goodfellow</surname>
<given-names>I. J.</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Courville</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2016</year>). <source>Deep Learning</source>. <publisher-loc>Cambridge, Massachusetts</publisher-loc>: <publisher-name>MIT Press</publisher-name>. </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goodfellow</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Pouget-Abadie</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mirza</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Warde-Farley</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Ozair</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2014</year>). <article-title>Generative Adversarial Nets</article-title>. <source>Adv. Neural Inf. Process. Syst.</source> <volume>27</volume>, <fpage>2672</fpage>&#x2013;<lpage>2680</lpage> [<comment>arXiv:1406.2661</comment>]. </citation>
</ref>
<ref id="B20">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Dong</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>Y.</given-names>
</name>
</person-group> &#x201c;<article-title>Simple Convolutional Neural Network on Image Classification</article-title>,&#x201d; in <conf-name>2017 IEEE 2nd International Conference on Big Data Analysis (ICBDA)</conf-name>, <conf-loc>Beijing, China</conf-loc>, <conf-date>March 2017</conf-date>, <fpage>721</fpage>&#x2013;<lpage>724</lpage>. </citation>
</ref>
<ref id="B21">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hao</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>Y.</given-names>
</name>
</person-group> &#x201c;<article-title>Inverse Design of Photonic Crystal Nanobeam Cavity Structure via Deep Neural Network</article-title>,&#x201d; in <conf-name>Proceedings of the Asia Communications and Photonics Conference (Optical Society of America), paper M4A.296</conf-name>, <conf-loc>Chengdu, China</conf-loc>, <conf-date>November 2019</conf-date>, <fpage>1597</fpage>&#x2013;<lpage>1600</lpage>. </citation>
</ref>
<ref id="B22">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>He</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Ren</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>J.</given-names>
</name>
</person-group> &#x201c;<article-title>Deep Residual Learning for Image Recognition</article-title>,&#x201d; in <conf-name>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</conf-name>, <conf-loc>Las Vegas, NV, USA</conf-loc>, <conf-date>June 2016</conf-date>, <fpage>770</fpage>&#x2013;<lpage>778</lpage>. </citation>
</ref>
<ref id="B23">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Hijazi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Rowen</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2015</year>). <source>Using Convolutional Neural Networks for Image Recognition</source>. <publisher-loc>San Jose, CA, USA</publisher-loc>: <publisher-name>Cadence Design Systems Inc.</publisher-name>, <fpage>1</fpage>&#x2013;<lpage>12</lpage>. </citation>
</ref>
<ref id="B24">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hochreiter</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Schmidhuber</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>1997</year>). <article-title>Long Short-Term Memory</article-title>. <source>Neural Comput.</source> <volume>9</volume>, <fpage>1735</fpage>&#x2013;<lpage>1780</lpage>. <pub-id pub-id-type="doi">10.1162/neco.1997.9.8.1735</pub-id> </citation>
</ref>
<ref id="B25">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>G.</given-names>
</name>
</person-group> &#x201c;<article-title>Squeeze-and-excitation Networks</article-title>,&#x201d; in <conf-name>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</conf-name>, <conf-loc>Salt Lake City, UT, USA</conf-loc>, <conf-date>June 2018</conf-date>, <fpage>7132</fpage>&#x2013;<lpage>7141</lpage>. </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>Y.-W.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>H. W. H.</given-names>
</name>
<name>
<surname>Sokhoyan</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Pala</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Thyagarajan</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Han</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2016</year>). <article-title>Gate-tunable Conducting Oxide Metasurfaces</article-title>. <source>Nano Lett.</source> <volume>16</volume>, <fpage>5319</fpage>&#x2013;<lpage>5325</lpage>. <pub-id pub-id-type="doi">10.1021/acs.nanolett.6b00555</pub-id> </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Inampudi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Mosallaei</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Neural Network Based Design of Metagratings</article-title>. <source>Appl. Phys. Lett.</source> <volume>112</volume>, <fpage>241102</fpage>. <pub-id pub-id-type="doi">10.1063/1.5033327</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>J.&#x20;A.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Deep Neural Networks for the Evaluation and Design of Photonic Devices</article-title>. <source>Nat. Rev. Mater.</source> <volume>6</volume>, <fpage>679</fpage>&#x2013;<lpage>700</lpage>. <pub-id pub-id-type="doi">10.1038/s41578-020-00260-1</pub-id> </citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sell</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hoyer</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hickey</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>J.&#x20;A.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Free-form Diffractive Metagrating Design Based on Generative Adversarial Networks</article-title>. <source>ACS Nano</source> <volume>13</volume>, <fpage>8872</fpage>&#x2013;<lpage>8878</lpage>. <pub-id pub-id-type="doi">10.1021/acsnano.9b02371</pub-id> </citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Karmakar</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Chowdhury</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Das</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kamruzzaman</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Islam</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Assessing Trust Level of a Driverless Car Using Deep Learning</article-title>. <source>IEEE Trans. Intell. Transport. Syst.</source> <volume>22</volume>, <fpage>4457</fpage>&#x2013;<lpage>4466</lpage>. <pub-id pub-id-type="doi">10.1109/TITS.2021.3059261</pub-id> </citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Khan</surname>
<given-names>F. N.</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Lau</surname>
<given-names>A. P. T.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>An Optical Communication&#x27;s Perspective on Machine Learning and its Applications</article-title>. <source>J.&#x20;Lightwave Technol.</source> <volume>37</volume>, <fpage>493</fpage>&#x2013;<lpage>516</lpage>. <pub-id pub-id-type="doi">10.1109/JLT.2019.2897313</pub-id> </citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krizhevsky</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sutskever</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Hinton</surname>
<given-names>G. E.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Imagenet Classification with Deep Convolutional Neural Networks</article-title>. <source>Adv. Neural Inf. Process. Syst.</source> <volume>25</volume>, <fpage>1097</fpage>&#x2013;<lpage>1105</lpage>. <pub-id pub-id-type="doi">10.1145/3065386</pub-id> </citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>LeCun</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Hinton</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Deep Learning</article-title>. <source>Nature</source> <volume>521</volume>, <fpage>436</fpage>&#x2013;<lpage>444</lpage>. <pub-id pub-id-type="doi">10.1038/nature14539</pub-id> </citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>LeCun</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Bottou</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Haffner</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>1998</year>). <article-title>Gradient-based Learning Applied to Document Recognition</article-title>. <source>Proc. IEEE</source> <volume>86</volume>, <fpage>2278</fpage>&#x2013;<lpage>2324</lpage>. <pub-id pub-id-type="doi">10.1109/5.726791</pub-id> </citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Cen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Applications of Neural Networks for Spectrum Prediction and Inverse Design in the Terahertz Band</article-title>. <source>IEEE Photon. J.</source> <volume>12</volume>, <fpage>1</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1109/JPHOT.2020.3022053</pub-id> </citation>
</ref>
<ref id="B36">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>D. D.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>M.</given-names>
</name>
</person-group> &#x201c;<article-title>Medical Image Classification with Convolutional Neural Network</article-title>,&#x201d; in <conf-name>Proceedings of the 2014&#x20;13th International Conference on Control Automation Robotics &#x26; Vision (ICARCV)</conf-name>, <conf-loc>Singapore</conf-loc>, <conf-date>December 2014</conf-date>, <fpage>844</fpage>&#x2013;<lpage>848</lpage>. </citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liao</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>He</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zeng</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>H.-J.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Deep Learning-Based Data Storage for Low Latency in Data center Networks</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>26411</fpage>&#x2013;<lpage>26417</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2901742</pub-id> </citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Rivenson</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Yardimci</surname>
<given-names>N. T.</given-names>
</name>
<name>
<surname>Veli</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jarrahi</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>All-optical Machine Learning Using Diffractive Deep Neural Networks</article-title>. <source>Science</source> <volume>361</volume>, <fpage>1004</fpage>&#x2013;<lpage>1008</lpage>. <pub-id pub-id-type="doi">10.1126/science.aat8084</pub-id> </citation>
</ref>
<ref id="B39">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Khoram</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Training Deep Neural Networks for the Inverse Design of Nanophotonic Structures</article-title>. <source>ACS Photon.</source> <volume>5</volume>, <fpage>1365</fpage>&#x2013;<lpage>1369</lpage>. <pub-id pub-id-type="doi">10.1021/acsphotonics.7b01377</pub-id> </citation>
</ref>
<ref id="B40">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Xie</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>Arbitrarily Routed Mode-Division Multiplexed Photonic Circuits for Dense Integration</article-title>. <source>Nat. Commun.</source> <volume>10</volume>, <fpage>3263</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-019-11196-8</pub-id> </citation>
</ref>
<ref id="B41">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>K. T.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Raju</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Compounding Meta&#x2010;Atoms into Metamolecules with Hybrid Artificial Intelligence Techniques</article-title>. <source>Adv. Mater.</source> <volume>32</volume>, <fpage>1904790</fpage>. <pub-id pub-id-type="doi">10.1002/adma.201904790</pub-id> </citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lundervold</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Lundervold</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>An Overview of Deep Learning in Medical Imaging Focusing on MRI</article-title>. <source>Z. Med. Phys.</source> <volume>29</volume>, <fpage>102</fpage>&#x2013;<lpage>127</lpage>. <pub-id pub-id-type="doi">10.1016/j.zemedi.2018.11.002</pub-id> </citation>
</ref>
<ref id="B43">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Luongo</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Hakim</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Nguyen</surname>
<given-names>J.&#x20;H.</given-names>
</name>
<name>
<surname>Anandkumar</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hung</surname>
<given-names>A. J.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Deep Learning-Based Computer Vision to Recognize and Classify Suturing Gestures in Robot-Assisted Surgery</article-title>. <source>Surgery</source> <volume>169</volume>, <fpage>1240</fpage>&#x2013;<lpage>1244</lpage>. <pub-id pub-id-type="doi">10.1016/j.surg.2020.08.016</pub-id> </citation>
</ref>
<ref id="B44">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2021a</year>). <article-title>Intelligent Algorithms: New Avenues for Designing Nanophotonic Devices</article-title>. <source>Chin. Opt. Lett.</source> <volume>19</volume>, <fpage>011301</fpage>. <pub-id pub-id-type="doi">10.3788/COL202119.011301</pub-id> </citation>
</ref>
<ref id="B45">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Deep-learning-enabled On-Demand Design of Chiral Metamaterials</article-title>. <source>ACS Nano</source> <volume>12</volume>, <fpage>6326</fpage>&#x2013;<lpage>6334</lpage>. <pub-id pub-id-type="doi">10.1021/acsnano.8b03569</pub-id> </citation>
</ref>
<ref id="B46">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wen</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Probabilistic Representation and Inverse Design of Metamaterials Based on a Deep Generative Model with Semi&#x2010;Supervised Learning Strategy</article-title>. <source>Adv. Mater.</source> <volume>31</volume>, <fpage>1901111</fpage>. <pub-id pub-id-type="doi">10.1002/adma.201901111</pub-id> </citation>
</ref>
<ref id="B47">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Kudyshev</surname>
<given-names>Z. A.</given-names>
</name>
<name>
<surname>Boltasseva</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2021b</year>). <article-title>Deep Learning for the Design of Photonic Structures</article-title>. <source>Nat. Photon.</source> <volume>15</volume>, <fpage>77</fpage>&#x2013;<lpage>90</lpage>. <pub-id pub-id-type="doi">10.1038/s41566-020-0685-y</pub-id> </citation>
</ref>
<ref id="B48">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Malkiel</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Mrejen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nagler</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Arieli</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Wolf</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Suchowski</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Plasmonic Nanostructure Design and Characterization via Deep Learning</article-title>. <source>Light Sci. Appl.</source> <volume>7</volume>, <fpage>60</fpage>. <pub-id pub-id-type="doi">10.1038/s41377-018-0060-7</pub-id> </citation>
</ref>
<ref id="B49">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Misra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Saha</surname>
<given-names>I.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Artificial Neural Networks in Hardware: a Survey of Two Decades of Progress</article-title>. <source>Neurocomputing</source> <volume>74</volume>, <fpage>239</fpage>&#x2013;<lpage>255</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2010.03.021</pub-id> </citation>
</ref>
<ref id="B50">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moen</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Bannon</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kudo</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Graf</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Covert</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Van Valen</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Deep Learning for Cellular Image Analysis</article-title>. <source>Nat. Methods</source> <volume>16</volume>, <fpage>1233</fpage>&#x2013;<lpage>1246</lpage>. <pub-id pub-id-type="doi">10.1038/s41592-019-0403-1</pub-id> </citation>
</ref>
<ref id="B51">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Neumann</surname>
<given-names>J.&#x20;V.</given-names>
</name>
</person-group> (<year>2012</year>). <source>The Computer and the Brain</source>. <publisher-loc>New Haven, Connecticut</publisher-loc>: <publisher-name>Yale University Press</publisher-name>. </citation>
</ref>
<ref id="B52">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ntziachristos</surname>
<given-names>V.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Going Deeper Than Microscopy: the Optical Imaging Frontier in Biology</article-title>. <source>Nat. Methods</source> <volume>7</volume>, <fpage>603</fpage>&#x2013;<lpage>614</lpage>. <pub-id pub-id-type="doi">10.1038/nmeth.1483</pub-id> </citation>
</ref>
<ref id="B53">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peurifoy</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jing</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Cano-Renteria</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>DeLacy</surname>
<given-names>B. G.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Nanophotonic Particle Simulation and Inverse Design Using Artificial Neural Networks</article-title>. <source>Sci. Adv.</source> <volume>4</volume>, <fpage>eaar4206</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.aar4206</pub-id> </citation>
</ref>
<ref id="B54">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pfeiffer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ray</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>L. J.</given-names>
</name>
<name>
<surname>Grbic</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>High Performance Bianisotropic Metasurfaces: Asymmetric Transmission of Light</article-title>. <source>Phys. Rev. Lett.</source> <volume>113</volume>, <fpage>023902</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.113.023902</pub-id> </citation>
</ref>
<ref id="B55">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Popel</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tomkova</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tomek</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kaiser</surname>
<given-names>&#x141;.</given-names>
</name>
<name>
<surname>Uszkoreit</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bojar</surname>
<given-names>O.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Transforming Machine Translation: a Deep Learning System Reaches News Translation Quality Comparable to Human Professionals</article-title>. <source>Nat. Commun.</source> <volume>11</volume>, <fpage>4381</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-020-18073-9</pub-id> </citation>
</ref>
<ref id="B56">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>C.</given-names>
</name>
<etal/>
</person-group> (<year>2015</year>). <article-title>Catenary Optics for Achromatic Generation of Perfect Optical Angular Momentum</article-title>. <source>Sci. Adv.</source> <volume>1</volume>, <fpage>e1500396</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.1500396</pub-id> </citation>
</ref>
<ref id="B57">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Qian</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Jing</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Shen</surname>
<given-names>L.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Deep-learning-enabled Self-Adaptive Microwave Cloak without Human Intervention</article-title>. <source>Nat. Photon.</source> <volume>14</volume>, <fpage>383</fpage>&#x2013;<lpage>390</lpage>. <pub-id pub-id-type="doi">10.1038/s41566-020-0604-2</pub-id> </citation>
</ref>
<ref id="B58">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Qin</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Multifrequency Superscattering Pattern Shaping</article-title>. <source>Chin. Opt. Lett.</source> <volume>19</volume>, <fpage>123601</fpage>. <pub-id pub-id-type="doi">10.3788/col202119.123601</pub-id> </citation>
</ref>
<ref id="B59">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Raeker</surname>
<given-names>B. O.</given-names>
</name>
<name>
<surname>Grbic</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Compound Metaoptics for Amplitude and Phase Control of Wave Fronts</article-title>. <source>Phys. Rev. Lett.</source> <volume>122</volume>, <fpage>113901</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.122.113901</pub-id> </citation>
</ref>
<ref id="B60">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rav&#xec;</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Deligianni</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Berthelot</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Andreu-Perez</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lo</surname>
<given-names>B.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <article-title>Deep Learning for Health Informatics</article-title>. <source>IEEE J.&#x20;Biomed. Health Inform.</source> <volume>21</volume>, <fpage>4</fpage>&#x2013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1109/JBHI.2016.2636665</pub-id> </citation>
</ref>
<ref id="B61">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ren</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Shao</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Salim</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Three-dimensional Vectorial Holography Based on Machine Learning Inverse Design</article-title>. <source>Sci. Adv.</source> <volume>6</volume>, <fpage>eaaz4261</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.aaz4261</pub-id> </citation>
</ref>
<ref id="B62">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rickard</surname>
<given-names>J.&#x20;J.&#x20;S.</given-names>
</name>
<name>
<surname>Di-Pietro</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Davies</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Belli</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Oppenheimer</surname>
<given-names>P. G.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Rapid Optofluidic Detection of Biomarkers for Traumatic Brain Injury via Surface-Enhanced Raman Spectroscopy</article-title>. <source>Nat. Biomed. Eng.</source> <volume>4</volume>, <fpage>610</fpage>&#x2013;<lpage>623</lpage>. <pub-id pub-id-type="doi">10.1038/s41551-019-0510-4</pub-id> </citation>
</ref>
<ref id="B63">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rivenson</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>de Haan</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Y.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>Virtual Histological Staining of Unlabelled Tissue-Autofluorescence Images via Deep Learning</article-title>. <source>Nat. Biomed. Eng.</source> <volume>3</volume>, <fpage>466</fpage>&#x2013;<lpage>477</lpage>. <pub-id pub-id-type="doi">10.1038/s41551-019-0362-y</pub-id> </citation>
</ref>
<ref id="B64">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schwanecke</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Fedotov</surname>
<given-names>V. A.</given-names>
</name>
<name>
<surname>Khardikov</surname>
<given-names>V. V.</given-names>
</name>
<name>
<surname>Prosvirnin</surname>
<given-names>S. L.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zheludev</surname>
<given-names>N. I.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Nanostructured Metal Film with Asymmetric Optical Transmission</article-title>. <source>Nano Lett.</source> <volume>8</volume>, <fpage>2940</fpage>&#x2013;<lpage>2943</lpage>. <pub-id pub-id-type="doi">10.1021/nl801794d</pub-id> </citation>
</ref>
<ref id="B65">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Sharma</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Q.</given-names>
</name>
</person-group> &#x201c;<article-title>Transient Electromagnetic Modeling Using Recurrent Neural Networks</article-title>,&#x201d; in <conf-name>Proceedings of the IEEE MTT-S International Microwave Symposium Digest, 2005</conf-name>, <conf-loc>Long Beach, CA, USA</conf-loc>, <conf-date>June 2005</conf-date>, <fpage>1597</fpage>&#x2013;<lpage>1600</lpage>. </citation>
</ref>
<ref id="B66">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>N. C.</given-names>
</name>
<name>
<surname>Skirlo</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Prabhu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Baehr-Jones</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hochberg</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <article-title>Deep Learning with Coherent Nanophotonic Circuits</article-title>. <source>Nat. Photon</source> <volume>11</volume>, <fpage>441</fpage>&#x2013;<lpage>446</lpage>. <pub-id pub-id-type="doi">10.1038/nphoton.2017.93</pub-id> </citation>
</ref>
<ref id="B67">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Simonyan</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Zisserman</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Very Deep Convolutional Networks for Large-Scale Image Recognition</article-title>. <source>arXiv</source>. <comment>preprint arXiv:1409.1556</comment>. </citation>
</ref>
<ref id="B68">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Singh</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Optical Character Recognition Techniques: a Survey</article-title>. <source>J.&#x20;Emerging Trends Comput. Inf. Sci.</source> <volume>4</volume>, <fpage>545</fpage>&#x2013;<lpage>550</lpage>. </citation>
</ref>
<ref id="B69">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>So</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Badloe</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Noh</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bravo-Abad</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Rho</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Deep Learning Enabled Inverse Design in Nanophotonics</article-title>. <source>Nanophotonics</source> <volume>9</volume>, <fpage>1041</fpage>&#x2013;<lpage>1057</lpage>. <pub-id pub-id-type="doi">10.1515/nanoph-2019-0474</pub-id> </citation>
</ref>
<ref id="B70">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>So</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Mun</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Rho</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Simultaneous Inverse Design of Materials and Structures via Deep Learning: Demonstration of Dipole Resonance Engineering Using Core-Shell Nanoparticles</article-title>. <source>ACS Appl. Mater. Inter.</source> <volume>11</volume>, <fpage>24264</fpage>&#x2013;<lpage>24268</lpage>. <pub-id pub-id-type="doi">10.1021/acsami.9b05857</pub-id> </citation>
</ref>
<ref id="B71">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sui</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A Review of Optical Neural Networks</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>70773</fpage>&#x2013;<lpage>70783</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2987333</pub-id> </citation>
</ref>
<ref id="B72">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Szegedy</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Jia</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Sermanet</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Reed</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Anguelov</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> &#x201c;<article-title>Going Deeper with Convolutions</article-title>,&#x201d; in <conf-name>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</conf-name>, <conf-loc>Boston, MA</conf-loc>, <conf-date>June 2015</conf-date>, <fpage>1</fpage>&#x2013;<lpage>9</lpage>. </citation>
</ref>
<ref id="B73">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>T&#xf6;r&#xf6;k</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kao</surname>
<given-names>F. J.</given-names>
</name>
</person-group> (<year>2007</year>). <source>Optical Imaging and Microscopy: Techniques and Advanced Systems</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>. </citation>
</ref>
<ref id="B74">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Torrey</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Shavlik</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2010</year>). &#x201c;<article-title>Transfer Learning</article-title>,&#x201d; in <source>Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques (IGI Global)</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Olivas</surname>
<given-names>E. S.</given-names>
</name>
<name>
<surname>Guerrero</surname>
<given-names>J.&#x20;D. M.</given-names>
</name>
<name>
<surname>Sober</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Benedito</surname>
<given-names>J.&#x20;R. M.</given-names>
</name>
<name>
<surname>Lopez</surname>
<given-names>A. J.&#x20;S.</given-names>
</name>
</person-group> (<publisher-loc>Hershey, PA</publisher-loc>: <publisher-name>IGI Publishing</publisher-name>), <fpage>242</fpage>&#x2013;<lpage>264</lpage>. <pub-id pub-id-type="doi">10.4018/978-1-60566-766-9.ch011</pub-id> </citation>
</ref>
<ref id="B75">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vukusic</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Sambles</surname>
<given-names>J.&#x20;R.</given-names>
</name>
</person-group> (<year>2003</year>). <article-title>Photonic Structures in Biology</article-title>. <source>Nature</source> <volume>424</volume>, <fpage>852</fpage>&#x2013;<lpage>855</lpage>. <pub-id pub-id-type="doi">10.1038/nature01941</pub-id> </citation>
</ref>
<ref id="B76">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Fu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cui</surname>
<given-names>Y.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <article-title>Modulation Format Recognition and OSNR Estimation Using CNN-Based Deep Learning</article-title>. <source>IEEE Photon. Technol. Lett.</source> <volume>29</volume>, <fpage>1667</fpage>&#x2013;<lpage>1670</lpage>. <pub-id pub-id-type="doi">10.1109/LPT.2017.2742553</pub-id> </citation>
</ref>
<ref id="B77">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>P. C.</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>V.-C.</given-names>
</name>
<name>
<surname>Lai</surname>
<given-names>Y.-C.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>M.-K.</given-names>
</name>
<name>
<surname>Kuo</surname>
<given-names>H. Y.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>A Broadband Achromatic Metalens in the Visible</article-title>. <source>Nat. Nanotech</source> <volume>13</volume>, <fpage>227</fpage>&#x2013;<lpage>232</lpage>. <pub-id pub-id-type="doi">10.1038/s41565-017-0052-4</pub-id> </citation>
</ref>
<ref id="B78">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weigert</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Schmidt</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Boothe</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>M&#xfc;ller</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Dibrov</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Jain</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Content-aware Image Restoration: Pushing the Limits of Fluorescence Microscopy</article-title>. <source>Nat. Methods</source> <volume>15</volume>, <fpage>1090</fpage>&#x2013;<lpage>1097</lpage>. <pub-id pub-id-type="doi">10.1038/s41592-018-0216-7</pub-id> </citation>
</ref>
<ref id="B79">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wiecha</surname>
<given-names>P. R.</given-names>
</name>
<name>
<surname>Lecestre</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Mallet</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Larrieu</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Pushing the Limits of Optical Information Storage Using Deep Learning</article-title>. <source>Nat. Nanotechnol.</source> <volume>14</volume>, <fpage>237</fpage>&#x2013;<lpage>244</lpage>. <pub-id pub-id-type="doi">10.1038/s41565-018-0346-1</pub-id> </citation>
</ref>
<ref id="B80">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wiecha</surname>
<given-names>P. R.</given-names>
</name>
<name>
<surname>Muskens</surname>
<given-names>O. L.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Deep Learning Meets Nanophotonics: a Generalized Accurate Predictor for Near fields and Far fields of Arbitrary 3D Nanostructures</article-title>. <source>Nano Lett.</source> <volume>20</volume>, <fpage>329</fpage>&#x2013;<lpage>338</lpage>. <pub-id pub-id-type="doi">10.1021/acs.nanolett.9b03971</pub-id> </citation>
</ref>
<ref id="B81">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Hao</surname>
<given-names>Z.-L.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>J.-H.</given-names>
</name>
<name>
<surname>Bao</surname>
<given-names>Q.-L.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.-N.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H.-Y.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Total Transmission from Deep Learning Designs</article-title>. <source>J.&#x20;Electron. Sci. Technol.</source> <volume>20</volume>, <fpage>100146</fpage>. <pub-id pub-id-type="doi">10.1016/j.jnlest.2021.100146</pub-id> </citation>
</ref>
<ref id="B82">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Schuster</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Le</surname>
<given-names>Q. V.</given-names>
</name>
<name>
<surname>Norouzi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Macherey</surname>
<given-names>W.</given-names>
</name>
<etal/>
</person-group> (<year>2016</year>). <article-title>Google&#x2019;s Neural Machine Translation System: Bridging the gap between Human and Machine Translation</article-title>. <source>arXiv</source>. <comment>preprint arXiv:1609.08144</comment>. </citation>
</ref>
<ref id="B83">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Corcoran</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Boes</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nguyen</surname>
<given-names>T. G.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). <article-title>11 TOPS Photonic Convolutional Accelerator for Optical Neural Networks</article-title>. <source>Nature</source> <volume>589</volume>, <fpage>44</fpage>&#x2013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-020-03063-0</pub-id> </citation>
</ref>
<ref id="B84">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ji</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>On-chip Optical Matrix-Vector Multiplier</article-title>. <source>Proc. SPIE</source> <volume>8855</volume>. <pub-id pub-id-type="doi">10.1117/12.2028585</pub-id> </citation>
</ref>
<ref id="B85">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yoon</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Nam</surname>
<given-names>K. T.</given-names>
</name>
<name>
<surname>Rho</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Pragmatic Metasurface Hologram at Visible Wavelength: the Balance between Diffraction Efficiency and Fabrication Compatibility</article-title>. <source>ACS Photon.</source> <volume>5</volume>, <fpage>1643</fpage>&#x2013;<lpage>1647</lpage>. <pub-id pub-id-type="doi">10.1021/acsphotonics.7b01044</pub-id> </citation>
</ref>
<ref id="B86">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yu</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Capasso</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Flat Optics with Designer Metasurfaces</article-title>. <source>Nat. Mater</source> <volume>13</volume>, <fpage>139</fpage>&#x2013;<lpage>150</lpage>. <pub-id pub-id-type="doi">10.1038/nmat3839</pub-id> </citation>
</ref>
<ref id="B87">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zeiler</surname>
<given-names>M. D.</given-names>
</name>
<name>
<surname>Fergus</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Visualizing and Understanding Convolutional Networks</article-title>,&#x201d; in <source>European Conference on Computer Vision</source> (<publisher-name>Springer</publisher-name>), <fpage>818</fpage>&#x2013;<lpage>833</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-10590-1_53</pub-id> </citation>
</ref>
<ref id="B88">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Pu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>J.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <article-title>All-dielectric Metasurfaces for Simultaneous Giant Circular Asymmetric Transmission and Wavefront Shaping Based on Asymmetric Photonic Spin-Orbit Interactions</article-title>. <source>Adv. Funct. Mater.</source> <volume>27</volume>, <fpage>1704295</fpage>. <pub-id pub-id-type="doi">10.1002/adfm.201704295</pub-id> </citation>
</ref>
<ref id="B89">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Superlenses to Overcome the Diffraction Limit</article-title>. <source>Nat. Mater</source> <volume>7</volume>, <fpage>435</fpage>&#x2013;<lpage>441</lpage>. <pub-id pub-id-type="doi">10.1038/nmat2141</pub-id> </citation>
</ref>
<ref id="B90">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zheng</surname>
<given-names>Z.-h.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>H.-y.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>J.-H.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Ultra-compact Reconfigurable Device for Mode Conversion and Dual-Mode DPSK Demodulation via Inverse Design</article-title>. <source>Opt. Express</source> <volume>29</volume>, <fpage>17718</fpage>&#x2013;<lpage>17725</lpage>. <pub-id pub-id-type="doi">10.1364/OE.420874</pub-id> </citation>
</ref>
<ref id="B91">
<citation citation-type="other">
<person-group person-group-type="author">
<name>
<surname>Zhou</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Khosla</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Lapedriza</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Oliva</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Torralba</surname>
<given-names>A.</given-names>
</name>
</person-group> &#x201c;<article-title>Learning Deep Features for Discriminative Localization</article-title>,&#x201d; in <conf-name>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</conf-name>, <conf-loc>Las Vegas, NV, USA</conf-loc>, <conf-date>June 2016</conf-date>, <fpage>2921</fpage>&#x2013;<lpage>2929</lpage>. </citation>
</ref>
</ref-list>
</back>
</article>