<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Nanotechnol.</journal-id>
<journal-title>Frontiers in Nanotechnology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Nanotechnol.</abbrev-journal-title>
<issn pub-type="epub">2673-3013</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnano.2021.645995</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Nanotechnology</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Advances in Memristor-Based Neural Networks</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Xu</surname> <given-names>Weilin</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1183314/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Wang</surname> <given-names>Jingjuan</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1266272/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Yan</surname> <given-names>Xiaobing</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="corresp" rid="c002"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1019689/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Key Laboratory of Brain-Like Neuromorphic Devices and Systems of Hebei Province, College of Electron and Information Engineering, Hebei University</institution>, <addr-line>Baoding</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Guangxi Key Laboratory of Precision Navigation Technology and Application, Guilin University of Electronic Technology</institution>, <addr-line>Guilin</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>Electrical and Computer Engineering Department, Southern Illinois University Carbondale</institution>, <addr-line>Carbondale, IL</addr-line>, <country>United States</country></aff>
<aff id="aff4"><sup>4</sup><institution>Department of Materials Science and Engineering, National University of Singapore</institution>, <addr-line>Singapore</addr-line>, <country>Singapore</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: J. Joshua Yang, University of Southern California, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Rivu Midya, University of Massachusetts Amherst, United States; Zhongrui Wang, The University of Hong Kong, Hong Kong</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Weilin Xu <email>xwl&#x00040;guet.edu.cn</email></corresp>
<corresp id="c002">Xiaobing Yan <email>xiaobing_yan&#x00040;126.com</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Nanodevices, a section of the journal Frontiers in Nanotechnology</p></fn>
<fn fn-type="other" id="fn002"><p>&#x02020;These authors have contributed equally to this work</p></fn></author-notes>
<pub-date pub-type="epub">
<day>24</day>
<month>03</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>3</volume>
<elocation-id>645995</elocation-id>
<history>
<date date-type="received">
<day>24</day>
<month>12</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>03</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Xu, Wang and Yan.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Xu, Wang and Yan</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract><p>The rapid development of artificial intelligence (AI), big data analytics, cloud computing, and Internet of Things applications calls for emerging memristor devices and hardware systems that can perform massive data computation with low power consumption and a small chip area. This paper provides an overview of memristor device characteristics, models, synapse circuits, and neural network applications, with a focus on artificial neural networks and spiking neural networks. It also provides research summaries, comparisons, limitations, challenges, and future work opportunities.</p></abstract>
<kwd-group>
<kwd>memristor</kwd>
<kwd>integrated circuit</kwd>
<kwd>artificial neural network</kwd>
<kwd>spiking neural network</kwd>
<kwd>artificial intelligence</kwd>
</kwd-group>
<contract-num rid="cn001">61874158</contract-num>
<contract-num rid="cn001">61674050</contract-num>
<contract-sponsor id="cn001">National Natural Science Foundation of China<named-content content-type="fundref-id">10.13039/501100001809</named-content></contract-sponsor>
<counts>
<fig-count count="10"/>
<table-count count="6"/>
<equation-count count="2"/>
<ref-count count="115"/>
<page-count count="14"/>
<word-count count="9998"/>
</counts>
</article-meta>
</front>
<body>

<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>The resistor, capacitor, and inductor are the three basic circuit elements in passive circuit theory. In 1971, Professor Leon O. Chua of the University of California, Berkeley first postulated a fourth basic circuit element relating flux to charge, the missing memristor, which was successfully realized by a team led by Stanley Williams at HP Labs in 2008 (Chua, <xref ref-type="bibr" rid="B21">1971</xref>; Strukov et al., <xref ref-type="bibr" rid="B75">2008</xref>). The memristor is a non-linear, two-terminal, passive electrical component, and studies have shown that its conductance can be tuned by adjusting the amplitude, direction, or duration of the voltages applied to its terminals. Memristors have shown various outstanding properties, such as good compatibility with CMOS technology, a small device area for high-density on-chip integration, non-volatility, fast speed, low power dissipation, and high scalability (Lee et al., <xref ref-type="bibr" rid="B48">2008</xref>; Waser et al., <xref ref-type="bibr" rid="B92">2009</xref>; Akinaga and Shima, <xref ref-type="bibr" rid="B3">2010</xref>; Wong et al., <xref ref-type="bibr" rid="B93">2012</xref>; Yang et al., <xref ref-type="bibr" rid="B108">2013</xref>; Choi et al., <xref ref-type="bibr" rid="B19">2014</xref>; Sun et al., <xref ref-type="bibr" rid="B78">2020</xref>; Wang et al., <xref ref-type="bibr" rid="B90">2020</xref>; Zhang et al., <xref ref-type="bibr" rid="B111">2020</xref>). 
Thus, although memristors took many years to transform from a purely theoretical derivation into a feasible implementation, these devices are now widely used in applications such as machine learning and neuromorphic computing, as well as non-volatile random-access memory (Alibart et al., <xref ref-type="bibr" rid="B5">2013</xref>; Liu et al., <xref ref-type="bibr" rid="B55">2013</xref>; Sarwar et al., <xref ref-type="bibr" rid="B69">2013</xref>; Fackenthal et al., <xref ref-type="bibr" rid="B24">2014</xref>; Prezioso et al., <xref ref-type="bibr" rid="B67">2015</xref>; Midya et al., <xref ref-type="bibr" rid="B60">2017</xref>; Yan et al., <xref ref-type="bibr" rid="B102">2017</xref>, <xref ref-type="bibr" rid="B99">2019b</xref>,<xref ref-type="bibr" rid="B104">d</xref>; Ambrogio et al., <xref ref-type="bibr" rid="B7">2018</xref>; Krestinskaya et al., <xref ref-type="bibr" rid="B44">2018</xref>; Li C. et al., <xref ref-type="bibr" rid="B49">2018</xref>, Li et al., <xref ref-type="bibr" rid="B50">2019</xref>; Wang et al., <xref ref-type="bibr" rid="B85">2018a</xref>, <xref ref-type="bibr" rid="B87">2019a</xref>,<xref ref-type="bibr" rid="B88">b</xref>; Upadhyay et al., <xref ref-type="bibr" rid="B81">2020</xref>). Furthermore, thanks to their combined computing and storage capabilities, memristors are promising devices for processing massive data and increasing data-processing efficiency in neural networks for artificial intelligence (AI) applications (Jeong and Shi, <xref ref-type="bibr" rid="B33">2018</xref>).</p>
<p>This article analyzes memristor theory, models, circuits, and important applications in neural networks. The contents of this paper are organized as follows. Section Memristor Characteristics and Models introduces memristor theory and models. Section Memristor-Based Neural Networks presents memristor applications in the second-generation neural networks, namely artificial neural networks (ANNs), and the third-generation neural networks, namely spiking neural networks (SNNs). Section Summary presents the conclusions and future research directions.</p>
</sec>
<sec id="s2">
<title>Memristor Characteristics and Models</title>
<p>The relationship between the physical quantities (namely charge <italic>q</italic>, voltage <italic>v</italic>, flux &#x003C6;, and current <italic>i</italic>) and the basic circuit elements (namely resistor <italic>R</italic>, capacitor <italic>C</italic>, inductor <italic>L</italic>, and memristor <italic>M</italic>) is shown in <xref ref-type="fig" rid="F1">Figure 1A</xref> (Chua, <xref ref-type="bibr" rid="B21">1971</xref>). Specifically, <italic>C</italic> is defined as the relationship between electric charge <italic>q</italic> and voltage <italic>v</italic> (<italic>C</italic> = <italic>dq/dv</italic>), <italic>L</italic> is defined as the relationship between magnetic flux &#x003C6; and current <italic>i</italic> (<italic>L</italic> = <italic>d</italic>&#x003C6;<italic>/di</italic>), and <italic>R</italic> is defined as the relationship between voltage <italic>v</italic> and current <italic>i</italic> (<italic>R</italic> = <italic>dv/di</italic>). The missing link between electric charge and flux defines the memristor <italic>M</italic>, whose differential equation is <italic>M</italic> = <italic>d</italic>&#x003C6;<italic>/dq</italic>, or equivalently the memductance <italic>G</italic> = <italic>dq/d</italic>&#x003C6;. <xref ref-type="fig" rid="F1">Figure 1B</xref> shows the current-voltage characteristics of the memristor, where the pinched hysteresis loop is its fundamental identifier (Yan et al., <xref ref-type="bibr" rid="B105">2018c</xref>). As a basic element, the memristor's I&#x02013;V curve cannot be reproduced by any combination of <italic>R, C</italic>, and <italic>L</italic>. According to the shape of the pinched loop, a memristor can be roughly classified as digital-type or analog-type. The resistance of a digital memristor exhibits an abrupt change with a high resistance ratio. The high-resistance and low-resistance states of a digital memristor have long retention, making it ideal for memory and logic operations. An analog memristor exhibits a gradual change in resistance and is therefore more suitable for analog circuits and hardware-based multi-state neuromorphic system applications.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>(A)</bold> Basic theoretical circuit elements, and <bold>(B)</bold> pinched hysteresis I&#x02013;V loop of memristor.</p></caption>
<graphic xlink:href="fnano-03-645995-g0001.tif"/>
</fig>
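<p>The pinched hysteresis loop of <xref ref-type="fig" rid="F1">Figure 1B</xref> can be reproduced numerically from the linear ion drift model discussed later in this section. The sketch below is illustrative only (parameter values are not fitted to any real device); it assumes the memristance <italic>M</italic>(&#x003C9;) = <italic>R</italic><sub>on</sub>&#x003C9;/<italic>D</italic> + <italic>R</italic><sub>off</sub>(1 &#x02212; &#x003C9;/<italic>D</italic>) with state update <italic>d</italic>&#x003C9;/<italic>dt</italic> = <italic>u</italic><sub><italic>v</italic></sub><italic>R</italic><sub>on</sub><italic>i</italic>(<italic>t</italic>)/<italic>D</italic>:</p>

```python
import math

def simulate_linear_drift(v_amp=1.0, freq=1.0, steps=20000, cycles=2,
                          R_on=100.0, R_off=16e3, D=10e-9, u_v=1e-14):
    """Forward-Euler simulation of the linear ion drift memristor model.

    State update: dw/dt = u_v * R_on / D * i(t)
    Memristance:  M(w)  = R_on * w / D + R_off * (1 - w / D)
    Parameter values are illustrative, not fitted to a real device.
    """
    dt = (cycles / freq) / steps
    w = 0.1 * D                       # start mostly undoped (high resistance)
    vs, cs = [], []
    for n in range(steps):
        v = v_amp * math.sin(2.0 * math.pi * freq * n * dt)
        M = R_on * w / D + R_off * (1.0 - w / D)
        i = v / M
        w += u_v * R_on / D * i * dt  # linear dopant drift
        w = min(max(w, 0.0), D)       # clamp to physical boundaries
        vs.append(v)
        cs.append(i)
    return vs, cs

vs, cs = simulate_linear_drift()
```

Because <italic>i</italic> = <italic>v/M</italic> with <italic>M</italic> &#x0003E; 0, the curve passes through the origin whenever the voltage does; because &#x003C9; (and hence <italic>M</italic>) differs between the rising and falling branches of the drive, the same voltage yields two different currents, tracing the hysteresis lobes.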
<p>Memristor device technology and modeling research are the cornerstones of system applications. As shown in <xref ref-type="fig" rid="F2">Figure 2</xref>, top-level system applications (brain-machine interfaces, face or picture recognition, autonomous driving, IoT edge computing, big data analytics, and cloud computing) are built on device technology and modeling. Memristor-based analog, digital, and memory circuits provide the key link between device materials and system applications. Bi-stable memristors are mainly used as binary switches, binary memories, and digital logic circuits, while multi-state memristors are used as multi-bit memories, reconfigurable analog circuits, and neuromorphic circuits.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Memristor research and applications.</p></caption>
<graphic xlink:href="fnano-03-645995-g0002.tif"/>
</fig>
<p>Since HP Labs verified the nanoscale physical implementation, the physical behavior models of memristors have received a lot of attention. Accuracy, convergence, and computational efficiency are the most important factors in memristor models. These behavior models are expected to be simple, intuitive, easy to understand, and closed-form. To date, various models have been developed, each with its unique advantages and shortcomings. The models listed in <xref ref-type="table" rid="T1">Table 1</xref> are the most popular, including the linear ion drift model, the non-linear ion drift model, the Simmons tunnel barrier model, and the threshold adaptive memristor (TEAM) model (Simmons, <xref ref-type="bibr" rid="B73">1963</xref>; Strukov et al., <xref ref-type="bibr" rid="B75">2008</xref>; Biolek et al., <xref ref-type="bibr" rid="B8">2009</xref>; Pickett et al., <xref ref-type="bibr" rid="B66">2009</xref>; Kvatinsky et al., <xref ref-type="bibr" rid="B45">2012</xref>). In the linear ion drift model, <italic>D</italic> and <italic>u</italic><sub><italic>v</italic></sub> represent the full length of the memristor film and the average dopant mobility, respectively. &#x003C9;(t) is a dynamic state variable whose value is limited between 0 and <italic>D</italic> to account for the physical device size. The low turn-on resistance <italic>R</italic><sub>on</sub> is the fully doped resistance, reached when the dynamic variable &#x003C9;(t) equals <italic>D</italic>. The high turn-off resistance <italic>R</italic><sub><italic>off</italic></sub> is the fully undoped resistance, reached when &#x003C9;(t) equals 0. In addition, a window function multiplying the state-variable derivative is needed to nullify the derivative at the boundaries and provide a non-linear transition for the physical boundary simulation. 
Several window functions have been proposed for modeling memristors, such as the Biolek, Strukov, Joglekar, and Prodromakis window functions (Strukov et al., <xref ref-type="bibr" rid="B75">2008</xref>; Biolek et al., <xref ref-type="bibr" rid="B8">2009</xref>; Joglekar and Wolf, <xref ref-type="bibr" rid="B37">2009</xref>; Strukov and Williams, <xref ref-type="bibr" rid="B76">2009</xref>; Prodromakis et al., <xref ref-type="bibr" rid="B68">2011</xref>). As the first memristor model, the linear ion drift model is simple, intuitive, and easy to understand. However, the modulation of the state variable &#x003C9; in nano-scale devices is not a linear process, and memristor experiments show non-linear I&#x02013;V characteristics. The non-linear ion drift model provides a better description of non-linear ionic transport and higher accuracy by experimentally fitting the parameters <italic>n</italic>, &#x003B2;, &#x003B1;, and &#x003C7; (Biolek et al., <xref ref-type="bibr" rid="B8">2009</xref>), but more of the physical reaction kinetics still needs to be captured. The Simmons tunnel barrier model consists of a resistor in series with an electron tunnel barrier, which provides a more detailed representation of non-linear and asymmetric behavior (Simmons, <xref ref-type="bibr" rid="B73">1963</xref>; Pickett et al., <xref ref-type="bibr" rid="B66">2009</xref>). However, this segmented model has nine fitting parameters, which makes it mathematically complicated and computationally inefficient. The TEAM model can be thought of as a simplified version of the Simmons tunnel barrier model (Kvatinsky et al., <xref ref-type="bibr" rid="B45">2012</xref>). 
However, all of the above models suffer from smoothing problems or mathematical ill-posedness, and they cannot provide robust and predictable simulation results in DC, AC, and transient analyses, let alone complicated circuit analyses such as noise analysis and periodic steady-state analysis (Wang and Roychowdhury, <xref ref-type="bibr" rid="B84">2016</xref>). Therefore, for transistor-level circuit design simulation, circuit designers usually have to replace the actual memristor with an emulator (Yang et al., <xref ref-type="bibr" rid="B106">2019</xref>). An emulator is a complex CMOS circuit used to simulate some performance aspect of a specific memristor; it is not a true model and differs substantially from a real memristor model (Yang et al., <xref ref-type="bibr" rid="B107">2014</xref>). Thus, establishing a complete memristor model is an urgent task. Correct bias definitions and accurate physical characteristics in SPICE or Verilog-A models are important for complex memristor circuit design; otherwise, non-physical predictions will mislead circuit engineers during physical chip design.</p>
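<p>The boundary behavior of the window functions cited above can be illustrated compactly. The sketch below uses the common Joglekar and Biolek forms with control parameter <italic>p</italic>; the expressions follow the cited papers, but the function and parameter names are illustrative:</p>

```python
def joglekar_window(x, p=2):
    """Joglekar window f(x) = 1 - (2x - 1)^(2p): zero at both boundaries
    x = 0 and x = 1, and maximal (f = 1) at mid-range x = 0.5."""
    return 1.0 - (2.0 * x - 1.0) ** (2 * p)

def biolek_window(x, i, p=2):
    """Biolek window f(x, i) = 1 - (x - stp(-i))^(2p), where stp(i) = 1
    for i >= 0 and 0 otherwise. Each boundary is only 'seen' when the
    state moves toward it, avoiding the stuck-at-boundary terminal state."""
    step = 1.0 if -i >= 0 else 0.0   # stp(-i)
    return 1.0 - (x - step) ** (2 * p)
```

For example, <monospace>joglekar_window(0.5)</monospace> returns 1.0, while <monospace>biolek_window(1.0, i)</monospace> returns 0.0 for any positive current, halting further drift into the upper boundary.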
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Classic memristor models.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Models</bold></th>
<th valign="top" align="left"><bold>Linear ion drift</bold></th>
<th valign="top" align="left"><bold>Non-linear ion drift</bold></th>
<th valign="top" align="left"><bold>Simmons tunnel barrier</bold></th>
<th valign="top" align="left"><bold>TEAM</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">I&#x02013;V characteristic</td>
<td valign="top" align="left"><inline-formula><mml:math id="M1"><mml:mi>v</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">Ron</mml:mtext></mml:mstyle><mml:mfrac><mml:mrow><mml:mi>w</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>D</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:mi>R</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>w</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>D</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mi>i</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula></td>
<td valign="top" align="left"><italic>i</italic>(<italic>t</italic>) &#x0003D; <italic>w</italic>(<italic>t</italic>)<sup><italic>n</italic></sup>&#x003B2;sinh(&#x003B1;&#x003C5;(<italic>t</italic>))&#x0002B;&#x003C7;[exp(&#x003B3;&#x003C5;(<italic>t</italic>))&#x02212;1]</td>
<td valign="top" align="left"><inline-formula><mml:math id="M2"><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">v</mml:mtext></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">t</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">Ron</mml:mtext></mml:mstyle><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mi>R</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mo>-</mml:mo><mml:mi>R</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>W</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mo>-</mml:mo><mml:mi>W</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>w</mml:mi><mml:mo>-</mml:mo><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mi>i</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula></td>
<td valign="top" align="left"><inline-formula><mml:math id="M3"><mml:mi>v</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x025AA;</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mo>-</mml:mo><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>w</mml:mi><mml:mo>-</mml:mo><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td valign="top" align="left">State variable <inline-formula><mml:math id="M4"><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>w</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula></td>
<td valign="top" align="left"><inline-formula><mml:math id="M5"><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>D</mml:mi></mml:mrow></mml:mfrac><mml:mtext>&#x000A0;</mml:mtext></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mi>i</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula></td>
<td valign="top" align="left">a&#x000B7;<italic>f</italic> (<italic>w</italic>) <italic><bold>v</bold></italic><bold>(</bold><italic><bold>t</bold></italic><bold>)</bold><italic><sup><italic>m</italic></sup></italic></td>
<td valign="top" align="left"><inline-formula><mml:math id="M6"><mml:mtable columnalign="left"><mml:mtr columnalign="left"><mml:mtd columnalign="left"><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left"><mml:mtr columnalign="left"><mml:mtd columnalign="left"><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mo>-</mml:mo><mml:mo class="qopname">sinh</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>w</mml:mi><mml:mo>-</mml:mo><mml:mi>a</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mo>|</mml:mo><mml:mi>i</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>i</mml:mi><mml:mo>&#x0003E;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mo>-</mml:mo><mml:mo class="qopname">sinh</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo 
class="qopname">exp</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>w</mml:mi><mml:mo>-</mml:mo><mml:mi>a</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mo>|</mml:mo><mml:mi>i</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>i</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></inline-formula></td>
<td valign="top" align="left"><inline-formula><mml:math id="M7"><mml:mrow><mml:mo>{</mml:mo><mml:mtable><mml:mtr><mml:mtd><mml:mi>k</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>i</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mn>0</mml:mn><mml:mo>&#x0003C;</mml:mo><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mi>i</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>k</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>i</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo 
stretchy="true">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mn>0</mml:mn><mml:mo>&#x0003C;</mml:mo><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mo>&#x0003C;</mml:mo><mml:mi>i</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>w</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></inline-formula></td>
</tr>
<tr>
<td valign="top" align="left">Interval</td>
<td valign="top" align="left">0 &#x02264; <italic>w</italic> &#x02264; <italic>D</italic></td>
<td valign="top" align="left">0 &#x02264; <italic>w</italic> &#x02264; <italic>1</italic></td>
<td valign="top" align="left"><italic>aoff &#x02264; w &#x02264; aon</italic></td>
<td valign="top" align="left"><italic>aon &#x02264; w &#x02264; aoff</italic></td>
</tr>
<tr>
<td valign="top" align="left">Control mechanism</td>
<td valign="top" align="left">Current-controlled</td>
<td valign="top" align="left">Voltage-controlled</td>
<td valign="top" align="left">Current-controlled</td>
<td valign="top" align="left">Current-controlled</td>
</tr>
<tr>
<td valign="top" align="left">Accuracy</td>
<td valign="top" align="left">Lowest accuracy</td>
<td valign="top" align="left">Low accuracy</td>
<td valign="top" align="left">Highest accuracy</td>
<td valign="top" align="left">Sufficient accuracy</td>
</tr>
<tr>
<td valign="top" align="left">Threshold exists</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Practically exists</td>
<td valign="top" align="left">Yes</td>
</tr>
<tr>
<td valign="top" align="left">Linearity</td>
<td valign="top" align="left">Linear</td>
<td valign="top" align="left">Non-linear</td>
<td valign="top" align="left">Non-linear</td>
<td valign="top" align="left">Non-linear</td>
</tr>
</tbody>
</table>
</table-wrap>
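<p>The threshold behavior of the TEAM model can be illustrated numerically: the state variable moves only once the current magnitude exceeds the corresponding threshold. A minimal sketch, in which the values of the fitting parameters k<sub>off</sub>, k<sub>on</sub>, i<sub>off</sub>, i<sub>on</sub>, a<sub>off</sub>, and a<sub>on</sub> are hypothetical rather than fitted to a device:</p>

```python
def team_dwdt(i, k_off=5e-4, k_on=-10.0, i_off=1e-6, i_on=-1e-6,
              a_off=3, a_on=3):
    """Piecewise dw/dt of the TEAM model: zero below the current
    thresholds, polynomial growth above them. Parameter values here
    are hypothetical, for illustration only."""
    if i > i_off:                                  # positive over-threshold
        return k_off * (i / i_off - 1.0) ** a_off
    if i < i_on:                                   # negative over-threshold
        return k_on * (i / i_on - 1.0) ** a_on
    return 0.0                                     # sub-threshold: no drift
```

With these signs (k<sub>on</sub> &#x0003C; 0), a sufficiently negative current drives the state down and a sufficiently positive one drives it up, while any current inside [i<sub>on</sub>, i<sub>off</sub>] leaves the state unchanged, which is the explicit threshold the other models lack.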
</sec>
<sec id="s3">
<title>Memristor-Based Neural Networks</title>
<sec>
<title>Neuron Biological Mechanisms and Memristive Synapse</title>
<p>The human brain can solve complex tasks, such as image recognition and data classification, more efficiently than traditional computers. The reason the brain excels at complicated functions is its large number of neurons and synapses that process information in parallel. As shown in <xref ref-type="fig" rid="F3">Figure 3</xref>, when an electrical signal is transmitted between two neurons via an axon and a synapse, the connection strength, or weight, is adjusted by the synapse. There are approximately 100 billion neurons in the human brain, each with about 10,000 synapses. Pre-synaptic and post-synaptic neurons transfer and receive signals as excitatory and inhibitory post-synaptic potentials by updating synaptic weights. Long-term potentiation (LTP) and long-term depression (LTD) are important mechanisms in a biological nervous system, indicating long-lasting changes in the connection strengths between neurons. The modification of synaptic weight according to the interval between pre-synaptic and post-synaptic action potentials, or spikes, is known as spike-timing-dependent plasticity (STDP) (Yan et al., <xref ref-type="bibr" rid="B101">2018a</xref>, <xref ref-type="bibr" rid="B100">2019c</xref>). Due to their scalability, low-power operation, non-volatility, and small on-chip area, memristors are good candidates for artificial synaptic devices mimicking the LTP, LTD, and STDP behaviors (Jo et al., <xref ref-type="bibr" rid="B36">2010</xref>; Ohno et al., <xref ref-type="bibr" rid="B62">2011</xref>; Kim et al., <xref ref-type="bibr" rid="B41">2015</xref>; Wang et al., <xref ref-type="bibr" rid="B86">2017</xref>; Yan et al., <xref ref-type="bibr" rid="B102">2017</xref>).</p>
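<p>The STDP mechanism described above can be sketched as a simple pair-based kernel: the weight change decays exponentially with the magnitude of the spike interval, and its sign depends on the spike order. The learning rates and time constant below are illustrative, not taken from a specific device:</p>

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.05, a_minus=0.055, tau_ms=20.0):
    """Pair-based STDP kernel.

    dt_ms = t_post - t_pre: positive when the pre-synaptic spike precedes
    the post-synaptic spike (potentiation), negative otherwise (depression).
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # LTP branch
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)   # LTD branch
    return 0.0
```

In a memristive synapse, such a weight change would be realized by a programming pulse that increments or decrements the device conductance.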
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Schematic of two interconnected neurons by synapses.</p></caption>
<graphic xlink:href="fnano-03-645995-g0003.tif"/>
</fig>
<p>There are some key requirements for memristive devices in neural network applications. For example, a wide resistance range is required to enable sufficient resistance states; devices must exhibit low resistance fluctuations and low device-to-device variability; a high absolute resistance is required for low power dissipation; and high durability is required for reprogramming and training (Choi et al., <xref ref-type="bibr" rid="B18">2018</xref>; Yan et al., <xref ref-type="bibr" rid="B103">2018b</xref>, <xref ref-type="bibr" rid="B98">2019a</xref>; Xia and Yang, <xref ref-type="bibr" rid="B96">2019</xref>). A concern with device stability is resistance drift, which occurs over time or with environmental changes. Resistance drift causes undesirable changes in synapse weight and blurs different resistance states, ultimately affecting the accuracy of neural network computation (Xia and Yang, <xref ref-type="bibr" rid="B96">2019</xref>). To deal with this drift challenge, improvements can be made in three aspects: (1) material and device engineering, (2) circuit design, and (3) system design (Alibart et al., <xref ref-type="bibr" rid="B4">2012</xref>; Choi et al., <xref ref-type="bibr" rid="B18">2018</xref>; Jiang et al., <xref ref-type="bibr" rid="B35">2018</xref>; Lastras-Monta&#x000F1;o and Cheng, <xref ref-type="bibr" rid="B46">2018</xref>; Yan et al., <xref ref-type="bibr" rid="B103">2018b</xref>, <xref ref-type="bibr" rid="B98">2019a</xref>; Zhao et al., <xref ref-type="bibr" rid="B113">2020</xref>). For example, at the material-engineering level, threading dislocations can be used to control programming variation and enhance switching uniformity (Choi et al., <xref ref-type="bibr" rid="B18">2018</xref>). 
At the circuit level, a module of two series memristors and a minimum-size transistor can be used, so that the weight is encoded in the resistance ratio of the two memristors to compensate for resistance drift (Lastras-Monta&#x000F1;o and Cheng, <xref ref-type="bibr" rid="B46">2018</xref>). At the system-design level, device deviation can be reduced by protocols such as a closed-loop peripheral circuit with a write-verify function (Alibart et al., <xref ref-type="bibr" rid="B4">2012</xref>). To obtain linear and symmetric weight updates in LTP and LTD for efficient neural network training, optimized programming pulses can be used to excite memristors with either fixed-amplitude or fixed-width voltage pulses (Jiang et al., <xref ref-type="bibr" rid="B35">2018</xref>; Zhao et al., <xref ref-type="bibr" rid="B113">2020</xref>). Note that changing the memristor resistance through complex programming pulses inevitably increases energy consumption.</p>
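The closed-loop write-verify idea mentioned above can be sketched in a few lines of Python: apply a programming pulse, read back the conductance, and repeat until the target state is reached within tolerance. The `ToyMemristor` device model, the fixed step size, and the tolerance are hypothetical stand-ins for a real device and its peripheral circuit, not the specific protocol of Alibart et al. (2012).

```python
def write_verify(device_read, device_pulse, g_target, tol=1e-6, max_pulses=100):
    """Closed-loop write-verify: pulse the device until the read conductance
    is within tol of g_target. device_read() returns conductance (S);
    device_pulse(sign) nudges it up (+1) or down (-1). Names are illustrative."""
    for n in range(max_pulses):
        err = g_target - device_read()
        if abs(err) <= tol:
            return n                      # converged after n pulses
        device_pulse(1 if err > 0 else -1)
    return max_pulses

class ToyMemristor:
    """Idealized device: each pulse changes conductance by a fixed step."""
    def __init__(self, g=50e-6, step=1e-6):
        self.g, self.step = g, step
    def read(self):
        return self.g
    def pulse(self, sign):
        self.g += sign * self.step

dev = ToyMemristor()
pulses = write_verify(dev.read, dev.pulse, g_target=60e-6, tol=0.5e-6)
```

A real device would have non-linear, noisy steps, which is exactly why the verify loop (rather than open-loop pulsing) is needed.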
<p>The comparison of different memristive synapse circuit structures is shown in <xref ref-type="table" rid="T2">Table 2</xref> (Kim et al., <xref ref-type="bibr" rid="B39">2011a</xref>; Wang et al., <xref ref-type="bibr" rid="B91">2014</xref>; Prezioso et al., <xref ref-type="bibr" rid="B67">2015</xref>; Hong et al., <xref ref-type="bibr" rid="B28">2019</xref>; Krestinskaya et al., <xref ref-type="bibr" rid="B43">2019</xref>). Single-memristor synapse (1M) crossbar arrays in neural networks have the lowest complexity and low power dissipation. However, they suffer from sneak-path problems and require complex peripheral switch circuits. Synapses with two memristors (2M) have a more flexible weight range and better symmetry between LTP and LTD, but the corresponding chip area is doubled. A synapse with one memristor and one transistor (1M-1T) has the advantage of solving the sneak-path problem, but it also occupies a large area in the large-scale integration of neural networks. A bridge synapse architecture with four memristors (4M) provides a bidirectional programming mechanism with a voltage-input, voltage-output interface. Due to the significant on-chip area overhead, the 1M-1T and 4M synapses may not be suitable for large-scale neural networks.</p>
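The 2M scheme's differential weight encoding can be sketched as follows: two non-negative conductances represent one signed weight, W = G<sup>+</sup> &#x02212; G<sup>&#x02212;</sup>, which is what gives the 2M synapse its positive, zero, and negative weight range. The conductance values below are illustrative.

```python
def differential_weight(g_plus, g_minus):
    """Signed synaptic weight from a 2M (two-memristor) pair: W = G+ - G-.

    Each physical conductance is non-negative, so a single device can only
    encode positive weights; the differential pair covers +, 0, and -.
    """
    return g_plus - g_minus

# Illustrative conductances in siemens:
w_excitatory = differential_weight(80e-6, 20e-6)   # positive weight
w_inhibitory = differential_weight(20e-6, 80e-6)   # negative weight
w_zero = differential_weight(50e-6, 50e-6)         # zero weight
```

The cost, as Table 2 notes, is twice the device count per synapse and more complex post-synaptic read circuitry to form the difference.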
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Comparison of different structure memristive synapse circuit.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Synapses</bold></th>
<th valign="top" align="center"><bold>Structure</bold></th>
<th valign="top" align="center"><bold>Area (F<sup>2</sup>)</bold></th>
<th valign="top" align="center"><bold>Weight</bold></th>
<th valign="top" align="left"><bold>Weight</bold><break/> <bold>range</bold></th>
<th valign="top" align="left"><bold>Other features</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1M</td>
<td valign="top" align="center"><inline-graphic xlink:href="fnano-03-645995-i0001.tif"/></td>
<td valign="top" align="center">&#x02248;4</td>
<td valign="top" align="center">G</td>
<td valign="top" align="left">&#x0002B;</td>
<td valign="top" align="left">Lower power consumption;<break/> least complex;<break/> sneak path problem in neural network array</td>
</tr>
<tr>
<td valign="top" align="left">2M</td>
<td valign="top" align="center"><inline-graphic xlink:href="fnano-03-645995-i0002.tif"/></td>
<td valign="top" align="center">&#x02248;8</td>
<td valign="top" align="center">G<sup>&#x0002B;</sup>-G<sup>&#x02212;</sup></td>
<td valign="top" align="left">&#x0002B;,0, &#x02013;</td>
<td valign="top" align="left">Better symmetry between LTP and LTD;<break/> complex post-synaptic neurons</td>
</tr>
<tr>
<td valign="top" align="left">1M-1T</td>
<td valign="top" align="center"><inline-graphic xlink:href="fnano-03-645995-i0003.tif"/></td>
<td valign="top" align="center">&#x02248;24</td>
<td valign="top" align="center">G</td>
<td valign="top" align="left">&#x0002B;</td>
<td valign="top" align="left">Solution for sneak path problem with transistor switch;<break/> largest size;<break/> transistor non-ideal effects</td>
</tr>
<tr>
<td valign="top" align="left">4M</td>
<td valign="top" align="center"><inline-graphic xlink:href="fnano-03-645995-i0004.tif"/></td>
<td valign="top" align="center">&#x02248;16</td>
<td valign="top" align="center"><inline-formula><mml:math id="M8"><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></inline-formula></td>
<td valign="top" align="left">&#x0002B;,0, &#x02013;</td>
<td valign="top" align="left">Voltage input, voltage output;<break/> Bidirectional programming;<break/> larger size</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Memristor-Based ANNs</title>
<p>The basic operations of classical hardware ANNs include multiplication, addition, and activation, which are accomplished by CMOS-based processors such as GPUs. The weights are typically stored in SRAM or DRAM. Despite the scalability of CMOS circuits, they are still not efficient enough for ANN applications. Furthermore, the SRAM cell size is too large to be integrated at high density, and DRAM needs to be refreshed periodically to prevent data decay. In either case, the data must be fetched into the cache and register files of the digital processors before processing and returned through the same data bus, leading to a significant speed limit and large energy consumption, which is the main challenge for deep learning and big data applications (Xia and Yang, <xref ref-type="bibr" rid="B96">2019</xref>). Compared with classical computation, modern ANNs feature a large number of computational parameters stored in memory. For example, a two-layer 784-800-10 fully-connected deep neural network for the MNIST dataset has 635,200 interconnections. A state-of-the-art deep neural network like the Visual Geometry Group (VGG) network has millions of parameters. These factors pose a huge challenge to the hardware implementation of ANNs. The memristor&#x00027;s non-volatility, low power consumption, low parasitic capacitance, reconfigurable resistance states, high speed, and adaptability give it a key role in ANN applications (Xia and Yang, <xref ref-type="bibr" rid="B96">2019</xref>). An ANN is an information processing model derived from mathematical optimization. A typical ANN architecture and its memristor crossbar are shown in <xref ref-type="fig" rid="F4">Figure 4</xref>. The system usually consists of three layers: an input layer, a middle or hidden layer, and an output layer. 
The connected units or nodes are neurons, which usually consist of a weighted-sum module in series with an activation-function module. Neurons also perform tasks of decoding, control, and signal routing. Due to their powerful signal processing capability, CMOS analog and digital logic circuits are the best candidates for the hardware implementation of neurons. In <xref ref-type="fig" rid="F4">Figure 4</xref>, arrows or connecting lines represent synapses, and their weights represent the connection strengths between two neurons. Assume the weight modulation matrix W<sub><italic>ij</italic></sub> in a memristor synapse crossbar is an N &#x000D7; M dimensional matrix, where <italic>i</italic> (<italic>i</italic> = 1, 2, &#x02026;, <italic>N</italic>) and <italic>j</italic> (<italic>j</italic> = 1, 2, &#x02026;, <italic>M</italic>) are the index numbers of the output and input ports of the memristor crossbar. Applying W<sub><italic>ij</italic></sub> between the pre-neuron input vector X<sub><italic>j</italic></sub> and the post-neuron output vector Y<sub><italic>i</italic></sub> is a matrix-vector multiplication operation, expressed as Equation (1) (Jeong and Shi, <xref ref-type="bibr" rid="B33">2018</xref>).</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M9"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>Y</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003A3;</mml:mi><mml:msub><mml:mrow><mml:mi>W</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E2"><label>(2)</label><mml:math id="M10"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>&#x00394;</mml:mi><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>r</mml:mi><mml:mrow><mml:mo stretchy="true">[</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mi>&#x02202;</mml:mi><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The matrix <italic>W</italic> can be continuously adjusted until the difference between the output value <italic>y</italic> and the target value y<sup>&#x0002A;</sup> is minimized. Equation (2) shows the synaptic weight tuning process with the gradient of the output error (y&#x02013;y<sup>&#x0002A;</sup>)<sup>2</sup> under a training rate <italic>r</italic> (Huang et al., <xref ref-type="bibr" rid="B31">2018</xref>). A memristor crossbar is therefore functionally equivalent to a CMOS adder plus a CMOS multiplier and an SRAM (Jeong and Shi, <xref ref-type="bibr" rid="B33">2018</xref>), because data are computed, stored, and regenerated on the same local device (i.e., the memristor itself). Besides, a crossbar can be vertically integrated into three dimensions (Seok et al., <xref ref-type="bibr" rid="B71">2014</xref>; Lin et al., <xref ref-type="bibr" rid="B53">2020</xref>; Luo et al., <xref ref-type="bibr" rid="B56">2020</xref>), which saves considerable chip area and power consumption. Because memristor synapses update and store weight data in place, the memory-wall problem of the von Neumann bottleneck is alleviated.</p>
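Equations (1) and (2) can be checked numerically. The NumPy sketch below treats one crossbar read as the matrix-vector product Y = W&#x000B7;X (Kirchhoff current summation over a column) and then applies a single gradient-descent weight update; the array sizes, random seed, and learning rate are arbitrary illustrative choices, not values from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Crossbar with N output rows and M input columns: W[i, j] is the conductance
# linking input port j to output port i. One read computes Y = W @ X in a
# single analog step (Equation 1).
N, M = 4, 8
W = rng.uniform(0.0, 1.0, size=(N, M))   # conductance-like weights
X = rng.uniform(-1.0, 1.0, size=M)       # input voltages
Y = W @ X                                # matrix-vector multiplication

# One training step toward a target y* (Equation 2) with rate r.
# For E = sum((y - y*)^2) and y = W @ x, dE/dW is the outer product
# of the output error and the input vector.
y_target = np.zeros(N)
r = 0.05
grad = 2.0 * np.outer(Y - y_target, X)
W_new = W - r * grad                     # delta_w = -r * dE/dw

err_before = float(np.sum((W @ X - y_target) ** 2))
err_after = float(np.sum((W_new @ X - y_target) ** 2))
```

For a sufficiently small rate the squared error shrinks after each update, which is the tuning process Equation (2) describes.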
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Typical ANN architecture and its memristor crossbar.</p></caption>
<graphic xlink:href="fnano-03-645995-g0004.tif"/>
</fig>
<p>Researchers have developed various topologies and learning algorithms for software-based or hardware-based ANNs. <xref ref-type="table" rid="T3">Table 3</xref> provides a comparison of typical memristive ANNs, including the single-layer perceptron (SLP) or multi-layer perceptron (MLP), CNN, cellular neural network (CeNN), and recurrent neural network (RNN). SLPs and MLPs are classic neural networks with well-known learning rules such as Hebbian learning and backpropagation. Although most ANN studies have been verified only by simulations or small-scale implementations, a single-layer neural network with a 128 &#x000D7; 64 1M-1T Ta/HfO<sub>2</sub> memristor array has been experimentally demonstrated with an image recognition accuracy of 89.9% on the MNIST dataset (Hu et al., <xref ref-type="bibr" rid="B29">2018</xref>). CNNs (also referred to as space-invariant or shift-invariant ANNs) are regularized versions of MLPs. Their hidden layers usually contain multiple complex activation functions and perform convolution or regional maximum-value (pooling) operations. Researchers have demonstrated an accuracy of over 70% in human-behavior video recognition with a memristor-based 3D CNN (Liu et al., <xref ref-type="bibr" rid="B54">2020</xref>). It should be emphasized that this verification is only a software simulation result, while the on-chip hardware demonstration is still very challenging, especially for deep CNNs (Wang et al., <xref ref-type="bibr" rid="B87">2019a</xref>; Luo et al., <xref ref-type="bibr" rid="B56">2020</xref>; Yao et al., <xref ref-type="bibr" rid="B109">2020</xref>). A CeNN is a massively parallel computing neural network whose communication is limited to adjacent cell neurons. The cells are dissipative non-linear continuous-time or discrete-time processing units. Due to their dynamic processing capability and flexibility, CeNNs are promising candidates for real-time high-frame-rate processing or multi-target motion detection. 
For example, a CeNN with a 4M memristive bridge circuit synapse has been proposed for image processing (Duan et al., <xref ref-type="bibr" rid="B23">2014</xref>). Unlike classic feed-forward ANNs, RNNs have feedback connections that enable temporal dynamic behavior, making them suitable for speech recognition applications. Long short-term memory (LSTM) is a useful RNN structure for deep learning. Hardware implementations of memristor-based LSTM networks have been reported (Smagulova et al., <xref ref-type="bibr" rid="B74">2018</xref>; Li et al., <xref ref-type="bibr" rid="B50">2019</xref>; Tsai et al., <xref ref-type="bibr" rid="B80">2019</xref>; Wang et al., <xref ref-type="bibr" rid="B87">2019a</xref>).</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Typical architectures of Memristive ANNs.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>TYPES</bold></th>
<th valign="top" align="left"><bold>Architecture</bold></th>
<th valign="top" align="left"><bold>Layers properties</bold></th>
<th valign="top" align="left"><bold>Applications</bold></th>
<th valign="top" align="left"><bold>Challenges</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">SLP/MLP</td>
<td valign="top" align="left">Input layer&#x0002B;hidden<break/> layer&#x0002B;output layer</td>
<td valign="top" align="left">Sigmoid, tanh, etc.,<break/> activation;<break/> Full connections</td>
<td valign="top" align="left">Simple pattern classification; Hand-written letter recognition</td>
<td valign="top" align="left">Power dissipation in deep ANN;<break/> Overfitting; non-ideal memristor; Scalability</td>
</tr>
<tr>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">Input layer&#x0002B;<break/>Convolution layer&#x0002B;<break/>ReLu layer&#x0002B;Pooling<break/>&#x0002B;Fully-connected and output layer</td>
<td valign="top" align="left">Convolution;<break/> Pooling</td>
<td valign="top" align="left">Image classification;<break/> Face recognition; Video analysis</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">CeNN</td>
<td valign="top" align="left">Cell array with<break/> templates; 1-D, 2-D, or 3-D</td>
<td valign="top" align="left">Dissipative non-linear cells;<break/> Lyapunov function; Neighborhood<break/> communication</td>
<td valign="top" align="left">Image filtering;<break/> Signal processing; moving object<break/> detection</td>
<td valign="top" align="left">Convergence and<break/> multistability in non-symmetric networks; non-ideal<break/> memristor</td>
</tr>
<tr>
<td valign="top" align="left">RNN</td>
<td valign="top" align="left">Fully recurrent;<break/> Elman; Jordan; gated recurrent unit; long short-term memory</td>
<td valign="top" align="left">Temporal dynamic<break/> behavior; directed graph along a temporal sequence; LSTM</td>
<td valign="top" align="left">Speech recognition; Machine translation;<break/> Video processing</td>
<td valign="top" align="left">Hard to train for long term<break/> dependencies; non-ideal memristor</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Due to atomic-level random defects and variability in the conductance modulation process, non-ideal memristor characteristics are the main causes of learning accuracy loss in ANNs. These non-idealities are manifested in the following aspects: asymmetric and non-linear weight changes between potentiation and depression, a limited ON/OFF weight ratio, and device variation. <xref ref-type="table" rid="T4">Table 4</xref> shows the main strategies for dealing with these issues. The effects of non-ideal memristor characteristics on ANN accuracy can be mitigated at four levels: device materials, circuits, architectures, and algorithms. At the device-materials level, switching uniformity and the analog on/off ratio can be enhanced by optimizing the redox reaction at the metal/oxide interface, adopting threading-dislocation technology, or adding a heating element (Jeong et al., <xref ref-type="bibr" rid="B34">2015</xref>; Lee et al., <xref ref-type="bibr" rid="B47">2015</xref>; Tanikawa et al., <xref ref-type="bibr" rid="B79">2018</xref>). At the circuit level, customized excitation pulses or hybrid CMOS-memristor synapses can be used to mitigate memristor non-ideal effects (Park et al., <xref ref-type="bibr" rid="B64">2013</xref>; Li et al., <xref ref-type="bibr" rid="B51">2016</xref>; Chang et al., <xref ref-type="bibr" rid="B12">2017</xref>; Li S. J. et al., <xref ref-type="bibr" rid="B52">2018</xref>; Woo and Yu, <xref ref-type="bibr" rid="B94">2018</xref>). At the architecture level, common techniques are multiple-memristor cells for high redundancy, pseudo-crossbar arrays, and peripheral-circuit compensation (Chen et al., <xref ref-type="bibr" rid="B15">2015</xref>). Co-optimization between memristors and ANN algorithms has also been reported (Li et al., <xref ref-type="bibr" rid="B51">2016</xref>). 
However, it should be noted that implementing these strategies inevitably brings side effects, such as high manufacturing cost, large power consumption, large chip area, complex peripheral circuits, or inefficient algorithms. For example, non-identical pulse excitation or bipolar-pulse-training methods improve the linearity and symmetry of memristor synapses, but they increase the complexity of the peripheral circuits, the system power consumption, and the chip area. Therefore, trade-offs and co-optimization are needed at each design level to improve the learning accuracy of ANNs (Gi et al., <xref ref-type="bibr" rid="B26">2018</xref>; Fu et al., <xref ref-type="bibr" rid="B25">2019</xref>). <xref ref-type="fig" rid="F5">Figure 5</xref> is a collaborative design example from bottom-level memristor devices to top-level training algorithms (Fu et al., <xref ref-type="bibr" rid="B25">2019</xref>). The conductance response (CR) curve of the memristors is first measured to obtain its non-linearity factor. The CR curve is then divided into piecewise-linear segments to obtain their slopes, and the width of each excitation pulse is made inversely proportional to the local slope. These data are stored in memory for comparison and correction by the memristor crossbars during weight updates. Through this method, the ANN recognition accuracy is improved.</p>
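The piecewise-linear pulse-width compensation just described can be sketched as follows: segment a measured CR curve, compute each segment's slope, and scale the pulse width inversely with that slope so that conductance steps stay uniform even where the curve saturates. The exponential CR curve, the segment count, and the base width below are illustrative assumptions, not data from Fu et al. (2019).

```python
import numpy as np

# Illustrative non-linear conductance response (CR): normalized conductance
# vs. pulse number, saturating so identical pulses give shrinking steps.
pulse_index = np.arange(0, 101)
cr_curve = 1.0 - np.exp(-pulse_index / 30.0)

# Divide the measured curve into linear segments and take each slope.
n_segments = 5
edges = np.linspace(0, len(pulse_index) - 1, n_segments + 1).astype(int)
slopes = [
    (cr_curve[b] - cr_curve[a]) / (b - a)
    for a, b in zip(edges[:-1], edges[1:])
]

# Pulse width per segment is inversely proportional to the local slope:
# flatter (more saturated) regions get longer pulses to equalize the step.
base_width = 1.0  # arbitrary time unit for the steepest segment
widths = [base_width * slopes[0] / s for s in slopes]
```

In a hardware system these per-segment widths would be the values "stored in memory for comparison and correction" during updates.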
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>ANNs learning accuracy improvement by mitigating memristor non-ideal effects.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Level</bold></th>
<th valign="top" align="left"><bold>Strategies</bold></th>
<th valign="top" align="left"><bold>Tradeoffs</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Device materials</td>
<td valign="top" align="left">Optimizing redox reaction at the metal/oxide interface (Lee et al., <xref ref-type="bibr" rid="B47">2015</xref>), Threading dislocations technology (Tanikawa et al., <xref ref-type="bibr" rid="B79">2018</xref>), <break/> Heating element, selectively enhanced filament expansion stage (Jeong et al., <xref ref-type="bibr" rid="B34">2015</xref>)</td>
<td valign="top" align="left">Manufacturing cost; <break/> Power consumption; On-chip area; <break/> Peripheral circuit complexity; <break/> Algorithm efficiency</td>
</tr>
<tr>
<td valign="top" align="left">Circuits</td>
<td valign="top" align="left">Hybrid CMOS-memristor Neuromorphic Synapse, 1R&#x0002B;1M1R for better device symmetry (Woo and Yu, <xref ref-type="bibr" rid="B94">2018</xref>), <break/> Non-identical pulse excitation (Park et al., <xref ref-type="bibr" rid="B64">2013</xref>; Chang et al., <xref ref-type="bibr" rid="B12">2017</xref>), <break/> Bipolar-pulse-training (Li et al., <xref ref-type="bibr" rid="B51">2016</xref>), <break/> Spike edge shape design (Li S. J. et al., <xref ref-type="bibr" rid="B52">2018</xref>)</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Architectures</td>
<td valign="top" align="left">Multiple memristors cell for high redundancy (Chen et al., <xref ref-type="bibr" rid="B15">2015</xref>), <break/> Pseudo-crossbar array, peripheral circuit compensation (Chen et al., <xref ref-type="bibr" rid="B15">2015</xref>)</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Algorithms</td>
<td valign="top" align="left">Co-optimization between memristors and ANN algorithms (Li et al., <xref ref-type="bibr" rid="B51">2016</xref>)</td>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Co-design from memristor non-ideal characteristics to the ANN algorithm (Fu et al., <xref ref-type="bibr" rid="B25">2019</xref>).</p></caption>
<graphic xlink:href="fnano-03-645995-g0005.tif"/>
</fig>
<p>Memristor-based ANN applications can be software, hardware, or hybrid (Kozhevnikov and Krasilich, <xref ref-type="bibr" rid="B42">2016</xref>). Software networks tend to be more accurate than their hardware counterparts because they do not suffer from analog element non-uniformity. However, hardware networks feature better speed and lower power consumption due to their non-von Neumann architectures (Kozhevnikov and Krasilich, <xref ref-type="bibr" rid="B42">2016</xref>). In <xref ref-type="fig" rid="F6">Figure 6</xref>, a deep neuromorphic accelerator ANN chip with 2.4 million Al<sub>2</sub>O<sub>3</sub>/TiO<sub>2&#x02212;x</sub> memristors was designed and fabricated (Kataeva et al., <xref ref-type="bibr" rid="B38">2019</xref>). This memristor chip consists of a 24 &#x000D7; 43 array with a 48 &#x000D7; 48 memristor crossbar at each intersection, which means its complexity is about 1,000 times higher than previous designs in the literature. This work is a good starting point for the operation of medium-scale memristor ANNs. Similar accelerators have appeared in the last 2 years (Cai et al., <xref ref-type="bibr" rid="B10">2019</xref>; Chen W.-H. et al., <xref ref-type="bibr" rid="B16">2019</xref>; Xue et al., <xref ref-type="bibr" rid="B97">2020</xref>).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>A deep neuromorphic ANN chip with 2.4 million memristor devices (Kataeva et al., <xref ref-type="bibr" rid="B38">2019</xref>).</p></caption>
<graphic xlink:href="fnano-03-645995-g0006.tif"/>
</fig>
<p>Memristive neural networks can be used to understand human emotion and simulate human operational abilities (Bishop, <xref ref-type="bibr" rid="B9">1995</xref>). The well-known Pavlov associative memory experiment has been implemented in memristive ANNs with a novel weighted-input-feedback learning method (Ma et al., <xref ref-type="bibr" rid="B57">2018</xref>). As more input signals, neurons, and memristor synapses are integrated, more complex emotional processing will be achieved in future AI chips. Due to material challenges and the lack of effective models, most of the demonstrations are limited to small-scale simulations of simple tasks. The shortcomings of memristors are mainly non-linearity, asymmetry, and variability, which seriously affect the accuracy of ANNs. Moreover, the peripheral circuits and interface must provide superior energy efficiency and data throughput.</p>
</sec>
<sec>
<title>Memristor-Based SNN</title>
<p>Inspired by the cognitive and computational methods of animal brains, the third-generation neural network, the SNN, combines compact mimicry of biological neurons with remarkable cognitive performance. The most prominent feature of SNNs is that they incorporate the concept of time into operations with discrete values, while the input and output values of second-generation ANNs are continuous. SNNs can better leverage the strengths of the biological paradigm of information processing, thanks to the hardware emulation of synapses and neurons. An ANN is computed layer by layer, which is relatively simple. In contrast, spike trains in SNNs are relatively difficult to interpret, and efficient coding methods for these spike trains are not easy to design. These dynamic, event-driven spikes enhance the ability of SNNs to process spatio-temporal or real-world sensory data, with fast adaptation and exponential memorization. The combination of spatio-temporal data allows SNNs to process signals naturally and efficiently.</p>
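One simple spike-coding scheme for turning analog inputs into the discrete spike trains discussed above is Poisson-like rate coding, sketched below; `rate_encode`, its parameters, and the rate scaling are our own illustrative choices, and SNN research also uses temporal and rank-order codes.

```python
import numpy as np

def rate_encode(values, n_steps, max_rate=0.5, seed=0):
    """Encode analog values in [0, 1] as Poisson-like binary spike trains.

    Each value becomes a row of n_steps spikes whose per-step firing
    probability is proportional to the value (capped at max_rate).
    The seed makes the illustration reproducible.
    """
    rng = np.random.default_rng(seed)
    probs = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)[:, None] * max_rate
    return (rng.random((len(values), n_steps)) < probs).astype(np.uint8)

# Three inputs (e.g., pixel intensities) over 200 time steps:
spikes = rate_encode([0.0, 0.3, 1.0], n_steps=200)
rates = spikes.mean(axis=1)   # empirical firing rate per input
```

The information is carried in the firing rate: a stronger input fires more often, and a zero input stays silent.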
<p>Neuron models, learning rules, and external stimulus coding are key research areas of SNNs. The Hodgkin &#x00026; Huxley (HH) model, the leaky integrate-and-fire (LIF) model, the spike response model (SRM), and the Izhikevich model are the most common neuron models (Hodgkin and Huxley, <xref ref-type="bibr" rid="B27">1952</xref>; Chua, <xref ref-type="bibr" rid="B22">2013</xref>; Ahmed et al., <xref ref-type="bibr" rid="B2">2014</xref>; Pfeiffer and Pfeil, <xref ref-type="bibr" rid="B65">2018</xref>; Wang and Yan, <xref ref-type="bibr" rid="B83">2019</xref>; Zhao et al., <xref ref-type="bibr" rid="B112">2019</xref>; Ojiugwo et al., <xref ref-type="bibr" rid="B63">2020</xref>). The HH model is a continuous-time mathematical model based on conductance. Although this model was derived from the study of squid, it is widely used for lower and higher organisms (even human beings). However, because it involves complex non-linear differential equations with four variables, the model is computationally expensive to solve with high accuracy. Chua established the memristor model of Hodgkin-Huxley neurons and proved that memristors can be applied to the imitation of complex neurobiology (Chua, <xref ref-type="bibr" rid="B22">2013</xref>). The Izhikevich model combines the biological plausibility of the HH model with simplicity and higher computational efficiency. The HH and Izhikevich models are computed with differential equations, while the LIF and SRM models are computed by an integral method. The SRM is an extended version of the LIF model, and the Izhikevich model can be considered a simplified version of the Hodgkin-Huxley model. These mathematical models are the results of different degrees of simplification and trade-offs in modeling biological neural networks. <xref ref-type="table" rid="T5">Table 5</xref> shows a comparison of several memristor-based SNNs. It can be seen that these SNN studies are based on STDP learning rules and LIF neurons. 
Most of them still target simple pattern-recognition applications, and only a few have hardware implementations.</p>
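A minimal discrete-time version of the LIF neuron used by most of the networks in Table 5 can be sketched as follows: the membrane potential leaks toward rest, integrates the input current, and emits a spike followed by a reset when it crosses threshold. The time constant, threshold, and drive values are illustrative, not taken from any cited implementation.

```python
def lif_neuron(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron (forward-Euler discretization).

    inputs: sequence of input currents, one per time step.
    Returns a binary spike train of the same length.
    Parameters (tau, threshold, reset) are illustrative.
    """
    v = v_reset
    spikes = []
    for i_in in inputs:
        v += dt * (-v / tau + i_in)   # leak toward rest + integrate input
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset               # reset after firing
        else:
            spikes.append(0)
    return spikes

# Constant supra-threshold drive produces regular firing; zero drive is silent.
driven = lif_neuron([0.3] * 50)
silent = lif_neuron([0.0] * 50)
```

This event-driven behavior is what lets each hardware neuron idle at low power when no input arrives.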
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p>Comparison of several memristor-based SNNs.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>References</bold></th>
<th valign="top" align="left"><bold>Neuron</bold></th>
<th valign="top" align="left"><bold>Synapse</bold></th>
<th valign="top" align="left"><bold>Learning rules</bold></th>
<th valign="top" align="center"><bold>Size</bold></th>
<th valign="top" align="left"><bold>Applications</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Zheng and Mazumder (<xref ref-type="bibr" rid="B114">2018</xref>)</td>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">1M1R fixed-polarity<break/> memristor</td>
<td valign="top" align="left">STDP;<break/> Supervised learning</td>
<td valign="top" align="center">784-300-<break/>10</td>
<td valign="top" align="left">Handwritten digits<break/> recognition</td>
</tr>
<tr>
<td valign="top" align="left">Chen B. et al. (<xref ref-type="bibr" rid="B14">2019</xref>)</td>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">Lithium silicate<break/> memristor</td>
<td valign="top" align="left">STDP, Unsupervised learning, WTA</td>
<td valign="top" align="center">128-128-<break/>12</td>
<td valign="top" align="left">Motion-style<break/> recognition</td>
</tr>
<tr>
<td valign="top" align="left">Shukla and Ganguly (<xref ref-type="bibr" rid="B72">2018</xref>)</td>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">HfO<sub>2</sub> memristor</td>
<td valign="top" align="left">STDP;<break/> Supervised Hebbian</td>
<td valign="top" align="center">16-3</td>
<td valign="top" align="left">Classification problems, Fisher Iris dataset, etc.</td>
</tr>
<tr>
<td valign="top" align="left">Wu and Saxena (<xref ref-type="bibr" rid="B95">2018</xref>)</td>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">Stochastic binary<break/> memristor</td>
<td valign="top" align="left">STDP, Dendritic-inspired processing</td>
<td valign="top" align="center">1-4</td>
<td valign="top" align="left">Pattern Recognition</td>
</tr>
<tr>
<td valign="top" align="left">Chu et al. (<xref ref-type="bibr" rid="B20">2014</xref>)</td>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">Pr<sub>0.7</sub><break/>Ca<sub>0.3</sub>MnO<sub>3</sub>-<break/>memristor</td>
<td valign="top" align="left">STDP, Unsupervised learning</td>
<td valign="top" align="center">30-10</td>
<td valign="top" align="left">Visual Pattern<break/> Recognition</td>
</tr>
<tr>
<td valign="top" align="left">Volos et al. (<xref ref-type="bibr" rid="B82">2015</xref>)</td>
<td valign="top" align="left">H-R,<break/> FHN</td>
<td valign="top" align="left">Flux-controlled<break/> memristor</td>
<td valign="top" align="left">STDP</td>
<td valign="top" align="center">2</td>
<td valign="top" align="left">Chaotic oscillators;<break/> Neurodynamic behavior</td>
</tr>
<tr>
<td valign="top" align="left">Al-Shedivat et al. (<xref ref-type="bibr" rid="B6">2015</xref>)</td>
<td valign="top" align="left">SRM</td>
<td valign="top" align="left">Stochastic biolek&#x00027;s memristor model</td>
<td valign="top" align="left">STDP, WTA</td>
<td valign="top" align="center">1568-32</td>
<td valign="top" align="left">Handwritten digits<break/> recognition</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The salient features of SNNs are as follows. First, biological neuron models (e.g., HH, LIF) are closer to biological neurons than the neurons of ANNs. Second, the transmitted information consists of time- or frequency-encoded discrete spikes, which can carry more information than the signals of traditional networks. Third, each neuron can work independently and enter a low-power standby mode when there is no input signal. Since SNNs have been shown to be theoretically more powerful than ANNs, their wide use is a natural goal. However, because spike trains are not differentiable, the gradient descent method cannot be used to train SNNs without losing precise temporal information. Another problem is that simulating SNNs on conventional hardware is computationally expensive, because it requires solving the underlying differential equations (Ojiugwo et al., <xref ref-type="bibr" rid="B63">2020</xref>). Owing to the complexity of SNNs, efficient learning rules that match the characteristics of biological neural networks have not yet been discovered; such a rule must model not only synaptic connectivity but also its growth and decay. A further challenge is the discontinuous nature of spike sequences, which makes many classic ANN learning rules unsuitable for SNNs, or usable only in approximate form, because convergence becomes a serious problem. Meanwhile, many SNN studies are limited to theoretical analysis and simulation of simple tasks rather than complex, intelligent tasks (e.g., multiple regression analysis, deductive and inductive reasoning, and their chip implementation) (Wang and Yan, <xref ref-type="bibr" rid="B83">2019</xref>). Although the future of SNNs is still unclear, many researchers believe that SNNs will eventually replace deep ANNs, because AI is essentially a process of mimicking the biological brain, and SNNs provide a natural mechanism for unsupervised learning.</p>
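To make the LIF dynamics above concrete, the following is a minimal discrete-time sketch of a leaky integrate-and-fire neuron; the leak factor, threshold, and input values are illustrative assumptions, not parameters from any cited work:

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron.
# leak, v_thresh, and the input values are illustrative assumptions.

def lif_neuron(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9, dt=1.0):
    """Return the binary spike train produced by an input current sequence."""
    v = v_rest
    spikes = []
    for i in input_current:
        v = leak * v + i * dt          # leaky integration of the input
        if v >= v_thresh:              # threshold crossing -> fire
            spikes.append(1)
            v = v_rest                 # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still fires periodically via integration.
print(lif_neuron([0.3] * 10))   # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The hard threshold at `v_thresh` is exactly the discontinuity that prevents direct gradient-descent training of SNNs: the spike output is a step function of the membrane voltage, with no usable derivative.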
<p>As shown in <xref ref-type="fig" rid="F7">Figure 7</xref>, a neural network is implemented with CMOS neurons, CMOS control circuits, and memristor synapses (Sun, <xref ref-type="bibr" rid="B77">2015</xref>). The aggregation module and the leaky integrate-and-fire module play the roles of dendrites and axon hillocks, respectively. Input neuron signals are summed temporally and spatially by a common-drain aggregation amplifier circuit. A memristor synapse weights the action potential signal, and its output, a post-synaptic potential signal, is transmitted to the post-neurons. Using the action potential signal and feedback signals from the post-neurons, the control circuit provides potentiation or depression signals to the memristor synapses during the synaptic update phase. Following the STDP learning rule, the transistor-level weight adjustment circuit is composed of a memristor device and CMOS transmission gates, where the transmission gates are controlled by the potentiation or depression signals. The system closely mirrors the main features of biological neurons, which is useful for neuromorphic SNN hardware implementation. A more complete description of SNN circuits and system applications is shown in <xref ref-type="fig" rid="F8">Figure 8</xref> (Wu and Saxena, <xref ref-type="bibr" rid="B95">2018</xref>). The system consists of event-driven CMOS neurons, a competitive neural coding algorithm [i.e., the winner-take-all (WTA) learning rule], and a multi-bit memristor synapse array. A stochastic non-linear STDP learning rule with an exponentially shaped window function is adopted to update the memristor synapse weights <italic>in situ</italic>. The amplitude and additional temporal delay of the half-rectangular, half-triangular spike waveform can be adjusted for dendritic-inspired processing. This work demonstrates the feasibility and strength of emerging memristor devices in neuromorphic applications, with low power consumption and compact on-chip area.</p>
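The exponential-window STDP update described above can be sketched in pair-based software form; the learning rates `a_plus`, `a_minus` and the time constant `tau` below are illustrative assumptions, not values from the cited circuits:

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP with an exponential window (illustrative parameters):
    potentiate when the pre-spike precedes the post-spike, depress otherwise."""
    dt = t_post - t_pre
    if dt > 0:    # causal pairing -> long-term potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # anti-causal pairing -> long-term depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

# Pre-spike 5 ms before the post-spike: positive weight change (potentiation).
print(stdp_delta_w(t_pre=10.0, t_post=15.0) > 0)   # True
# Pre-spike after the post-spike: negative weight change (depression).
print(stdp_delta_w(t_pre=20.0, t_post=15.0) < 0)   # True
```

In the hardware above, this signed weight change corresponds to driving the potentiation or depression signal of the transmission gates; the exponential decay with |Δt| is the "window function" shaping how strongly near-coincident spikes update the memristor conductance.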
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>CMOS neuron and memristor synapse weight update circuit (Sun, <xref ref-type="bibr" rid="B77">2015</xref>).</p></caption>
<graphic xlink:href="fnano-03-645995-g0007.tif"/>
</fig>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>CMOS-Memristor SNN (Wu and Saxena, <xref ref-type="bibr" rid="B95">2018</xref>).</p></caption>
<graphic xlink:href="fnano-03-645995-g0008.tif"/>
</fig>
<p>To avoid the large on-chip area and power dissipation of CMOS implementations of synaptic circuits (Chicca et al., <xref ref-type="bibr" rid="B17">2003</xref>; Seo et al., <xref ref-type="bibr" rid="B70">2011</xref>), Myonglae Chu adopted a Pr<sub>0.7</sub>Ca<sub>0.3</sub>MnO<sub>3</sub>-based memristor synapse array with CMOS leaky IAF neurons in an SNN. As shown in <xref ref-type="fig" rid="F9">Figure 9</xref>, the SNN chip was successfully developed for visual pattern recognition with modified STDP learning rules. The SNN hardware system includes 30 &#x000D7; 10 neurons and 300 memristor synapses. Although this hardware system only recognizes the digits 0&#x02013;9, it is a valuable attempt, as most SNN studies have remained at the software simulation stage (Kim et al., <xref ref-type="bibr" rid="B40">2011b</xref>; Adhikari et al., <xref ref-type="bibr" rid="B1">2012</xref>; Cantley et al., <xref ref-type="bibr" rid="B11">2012</xref>). The reader can refer to the literature (Wang et al., <xref ref-type="bibr" rid="B89">2018b</xref>; Ishii et al., <xref ref-type="bibr" rid="B32">2019</xref>; Midya et al., <xref ref-type="bibr" rid="B59">2019b</xref>) for more experimental memristor-SNN demonstrations.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>A memristor synapse array micrograph for SNN Application (Chu et al., <xref ref-type="bibr" rid="B20">2014</xref>).</p></caption>
<graphic xlink:href="fnano-03-645995-g0009.tif"/>
</fig>
</sec>
<sec>
<title>Comparison Between ANNs and SNNs</title>
<p>A comparison between ANNs and SNNs is shown in <xref ref-type="table" rid="T6">Table 6</xref> (Nenadic and Ghosh, <xref ref-type="bibr" rid="B61">2001</xref>; Chaturvedi and Khurshid, <xref ref-type="bibr" rid="B13">2011</xref>; Zhang et al., <xref ref-type="bibr" rid="B111">2020</xref>). Traditional ANNs require layer-by-layer computation, so they are computationally intensive and have relatively large power consumption. An SNN switches from standby mode to working mode only when an incoming spike drives the membrane voltage above the spike threshold. As a result, its system power consumption is relatively low.</p>
<table-wrap position="float" id="T6">
<label>Table 6</label>
<caption><p>Comparison between ANNs and SNNs.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th/>
<th valign="top" align="left"><bold>ANNs</bold></th>
<th valign="top" align="left"><bold>SNNs</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Generation</td>
<td valign="top" align="left">Second-generation NN</td>
<td valign="top" align="left">Third-generation NN</td>
</tr>
<tr>
<td valign="top" align="left">Biological brain mimicking</td>
<td valign="top" align="left">General</td>
<td valign="top" align="left">Better</td>
</tr>
<tr>
<td valign="top" align="left">Signal processing</td>
<td valign="top" align="left">Continuous multi-level value</td>
<td valign="top" align="left">Sparse and asynchronous binary time-domain coded spike signals. Event-driven discrete information processing</td>
</tr>
<tr>
<td valign="top" align="left">Energy efficiency</td>
<td valign="top" align="left">General</td>
<td valign="top" align="left">Better</td>
</tr>
<tr>
<td valign="top" align="left">Neurons and Synapses</td>
<td valign="top" align="left">Activation functions;</td>
<td valign="top" align="left">Hodgkin and Huxley, LIF, etc.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Digital or analog memristor synapses</td>
<td valign="top" align="left">Analog memristor synapses</td>
</tr>
<tr>
<td valign="top" align="left">Classical algorithms</td>
<td valign="top" align="left">Error-backpropagation</td>
<td valign="top" align="left">SpikeProp, STDP</td>
</tr>
<tr>
<td valign="top" align="left">Chip design</td>
<td valign="top" align="left">In progress with some achievement.</td>
<td valign="top" align="left">Preliminary stage</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Near-term application goals</td>
<td valign="top" align="left">Long-term application goals</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>SNNs, with their higher biological similarity, are expected to achieve higher energy efficiency than ANNs, but SNN hardware is harder to implement than ANN hardware. Thus, combining the advantages of ANNs and SNNs and using ANN-SNN converters to improve SNN performance is a valuable approach, which has been experimentally demonstrated (Midya et al., <xref ref-type="bibr" rid="B58">2019a</xref>). The first and second layers of the converter are ordinary ANN structures; the output signals of the second layer are converted into a spike sequence for a 32 &#x000D7; 1 1M-1T drift memristor synapse array at the third layer. Such an ANN-SNN converter may be a good path toward SNN hardware implementation. Despite the enormous potential of SNNs, there are currently no fully satisfactory general learning rules, and their computational capability has not been demonstrated; most existing methods lack comparability and generality. Compared with ANNs, the study of dynamic devices and efficient algorithms in SNNs is very challenging. SNNs only need to compute the activated connections, rather than all connections at every time step as in ANNs. However, the encoding and decoding of spikes remains one of the open challenges in SNN research and, in fact, requires further progress in neuroscience. ANNs are the near-term target of memristor research, and SNNs are the long-term goal.</p>
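The conversion step from analog ANN activations to spike sequences can be illustrated with generic rate coding, where an activation in [0, 1] becomes a spike train whose firing rate approximates it; this is a textbook-style sketch, not the specific converter of Midya et al.:

```python
def rate_encode(activation, n_steps=10):
    """Deterministically encode a [0, 1] activation as a spike train whose
    firing rate approximates the activation value (illustrative scheme)."""
    acc, spikes = 0.0, []
    for _ in range(n_steps):
        acc += activation            # accumulate the activation each time step
        if acc >= 1.0:               # emit a spike when the accumulator fills
            spikes.append(1)
            acc -= 1.0
        else:
            spikes.append(0)
    return spikes

def rate_decode(spikes):
    """Recover the activation as the observed firing rate."""
    return sum(spikes) / len(spikes)

train = rate_encode(0.4, n_steps=10)
print(train, rate_decode(train))   # 4 spikes in 10 steps -> rate 0.4
```

Longer spike trains trade latency for precision: the rate approximation error shrinks as `n_steps` grows, which is one reason converted SNNs typically need many time steps per inference.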
<p>For neural network applications, ANN and SNN memristor grids share some common challenges, such as sneak-path problems, IR-drop (ohmic drop), grid latency, and grid power dissipation, as shown in <xref ref-type="fig" rid="F10">Figure 10</xref> (Zidan et al., <xref ref-type="bibr" rid="B115">2013</xref>; Hu et al., <xref ref-type="bibr" rid="B30">2014</xref>, <xref ref-type="bibr" rid="B29">2018</xref>; Zhang et al., <xref ref-type="bibr" rid="B110">2017</xref>). The larger the memristor array, the greater the effect of these parasitic capacitances and resistances. In Figure 10, the desired weight-update path is the dot-and-dash line, and the sneak path is the dotted line: an undesired parallel path through non-gated memristor elements whose relative resistance allows stray current to flow. This phenomenon leads to unintended weight changes and reduces the accuracy of neural networks. The basic solution to the sneak path is to add a series-connected, gate-controlled MOS transistor to each memristor, as mentioned in <xref ref-type="table" rid="T2">Table 2</xref>. However, this method enlarges the on-chip synapse array and sacrifices the high-density integration advantage of memristors. Grounding the unselected memristor lines is another solution that requires no additional synapse area, but it leads to higher power consumption. There are other techniques, such as grounded lines, floating lines, additional biasing, non-unity aspect ratios of memristor arrays, and three-electrode memristor devices. These may be welcome in memristor memory applications, but not necessarily in memristor-based neural network applications (Zidan et al., <xref ref-type="bibr" rid="B115">2013</xref>). In neural network applications, the main concern for memristor arrays is whether the association between input and output signals is correct (Hu et al., <xref ref-type="bibr" rid="B30">2014</xref>); this is an important difference from memristor memory applications. IR-drop, memristor grid latency, and power consumption are signal-integrity effects caused by the grid parasitic resistance R<sub>par</sub> and parasitic capacitance C<sub>par</sub>. These non-ideal factors affect the potential distribution and signal transmission, and ultimately limit the scale of memristor arrays. Similar to CMOS layout and routing techniques, a large-scale memristor mesh can be divided into medium-sized modules with high-speed main signal paths to lower parasitic resistance, grid power consumption, and latency. It is worth noting that memristor process variations, grid IR-drop, and noise can worsen the sneak-path problem.</p>
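The IR-drop effect can be illustrated with a toy single-column model in which each cell accumulates extra series wire resistance with its distance from the output; all voltage, conductance, and resistance values below are illustrative assumptions:

```python
def column_current(v, g_cell, r_wire, n_cells):
    """Toy IR-drop model for one crossbar column: cell k sees k segments of
    wire resistance r_wire in series with its own memristor resistance 1/g_cell.
    Returns (ideal current without wire resistance, degraded actual current)."""
    ideal = n_cells * v * g_cell
    actual = sum(v / (1.0 / g_cell + k * r_wire) for k in range(1, n_cells + 1))
    return ideal, actual

# Illustrative values: 0.5 V reads, 10 kOhm cells, 10 Ohm per wire segment.
ideal, actual = column_current(v=0.5, g_cell=1e-4, r_wire=10.0, n_cells=128)
print(f"ideal {ideal*1e3:.3f} mA, with IR-drop {actual*1e3:.3f} mA")
```

In this toy model the column current falls further below its ideal value as `r_wire` or `n_cells` grows, mirroring why large memristor meshes are partitioned into medium-sized modules with low-resistance main signal paths.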
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p>Sneak path, IR-drop, latency, and energy in massive memristor grids of neural networks.</p></caption>
<graphic xlink:href="fnano-03-645995-g0010.tif"/>
</fig>
</sec>
</sec>
<sec id="s4">
<title>Summary</title>
<p>The advantages of memristors in neural network applications are their fast processing and energy efficiency in the computational process. At the device level, memristors have very low power dissipation and high on-chip density. At the architecture level, parallel computing is performed at the same location where data are stored, thereby avoiding frequent data movement and the memory wall issue. Owing to quantum effects and non-ideal characteristics in the manufacturing of nanometer-scale memristors, the robustness of memristor neural networks still needs to be improved. Meanwhile, the range of validity of the various memristor models is limited and has not been fully explored in chip design; to date, there is no complete, unified memristor model for chip designers. Furthermore, wire resistance, sneak-path current, and half-select problems are also challenges for the high-density integration of memristor crossbar arrays. Memristor neural network research involves engineering, biology, physics, algorithms, architecture, systems, circuits, equipment, and materials. There is still a long way to go for memristive neural networks, as most research remains at the level of single devices or small-scale prototypes. However, with the market promotion of IoT, big data, and AI, breakthrough research on memristor-based ANNs will be realized through the joint efforts of academia and industry.</p>
</sec>
<sec id="s5">
<title>Author Contributions</title>
<p>WX drafted the manuscript, developed the concept, and conceived the experiments. JW revised the manuscript. XY drafted and revised the manuscript. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Adhikari</surname> <given-names>S. P.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Kim</surname> <given-names>H.</given-names></name> <name><surname>Chua</surname> <given-names>L. O.</given-names></name></person-group> (<year>2012</year>). <article-title>Memristor bridge synapse-based neural network and its learning. <italic>IEEE. Trans. Neur. Netw. Learn</italic></article-title>. <source>Syst.</source> <volume>23</volume>, <fpage>1426</fpage>&#x02013;<lpage>1435</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2012.2204770</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>F. Y.</given-names></name> <name><surname>Yusob</surname> <given-names>B.</given-names></name> <name><surname>Hamed</surname> <given-names>H. N. A.</given-names></name></person-group> (<year>2014</year>). <source>Computing with spiking neuron networks: a review. <italic>Int. J. Adv. Soft Comput. Appl</italic>. 6</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.researchgate.net/publication/262374523_Computing_with_Spiking_Neuron_Networks_A_Review">https://www.researchgate.net/publication/262374523_Computing_with_Spiking_Neuron_Networks_A_Review</ext-link></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Akinaga</surname> <given-names>H.</given-names></name> <name><surname>Shima</surname> <given-names>H.</given-names></name></person-group> (<year>2010</year>). <article-title>Resistive random access memory (ReRAM) based on metal oxides</article-title>. <source>Proc. IEEE</source> <volume>98</volume>, <fpage>2237</fpage>&#x02013;<lpage>2251</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2010.2070830</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alibart</surname> <given-names>F.</given-names></name> <name><surname>Gao</surname> <given-names>L.</given-names></name> <name><surname>Hoskins</surname> <given-names>B. D.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B.</given-names></name></person-group> (<year>2012</year>). <article-title>High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm</article-title>. <source>Nanotechnology</source> <volume>23</volume>:<fpage>075201</fpage>. <pub-id pub-id-type="doi">10.1088/0957-4484/23/7/075201</pub-id><pub-id pub-id-type="pmid">22260949</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alibart</surname> <given-names>F.</given-names></name> <name><surname>Zamanidoost</surname> <given-names>E.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B.</given-names></name></person-group> (<year>2013</year>). <article-title>Pattern classification by memristive crossbar circuits using <italic>ex situ</italic> and <italic>in situ</italic> training</article-title>. <source>Nat. Commun.</source> <volume>4</volume>:<fpage>2072</fpage>. <pub-id pub-id-type="doi">10.1038/ncomms3072</pub-id><pub-id pub-id-type="pmid">23797631</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Al-Shedivat</surname> <given-names>M.</given-names></name> <name><surname>Naous</surname> <given-names>R.</given-names></name> <name><surname>Cauwenberghs</surname> <given-names>G.</given-names></name> <name><surname>Salama</surname> <given-names>K. N.</given-names></name></person-group> (<year>2015</year>). <article-title>Memristors empower spiking neurons with stochasticity. <italic>IEEE. J. Emerg. Select. Top. Circ</italic></article-title>. <source>Syst.</source> <volume>5</volume>, <fpage>242</fpage>&#x02013;<lpage>253</lpage>. <pub-id pub-id-type="doi">10.1109/JETCAS.2015.2435512</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ambrogio</surname> <given-names>S.</given-names></name> <name><surname>Narayanan</surname> <given-names>P.</given-names></name> <name><surname>Tsai</surname> <given-names>H.</given-names></name> <name><surname>Shelby</surname> <given-names>R. M.</given-names></name> <name><surname>Boybat</surname> <given-names>I.</given-names></name> <name><surname>Di Nolfo</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Equivalent-accuracy accelerated neural-network training using analogue memory</article-title>. <source>Nature</source> <volume>558</volume>, <fpage>60</fpage>&#x02013;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-018-0180-5</pub-id><pub-id pub-id-type="pmid">29875487</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Biolek</surname> <given-names>Z.</given-names></name> <name><surname>Biolek</surname> <given-names>D.</given-names></name> <name><surname>Biolkova</surname> <given-names>V.</given-names></name></person-group> (<year>2009</year>). <article-title>SPICE model of memristor with nonlinear dopant drift</article-title>. <source>Radioengineering</source> <volume>18</volume>, <fpage>210</fpage>&#x02013;<lpage>214</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.researchgate.net/publication/26625012_SPICE_Model_of_Memristor_with_Nonlinear_Dopant_Drift">https://www.researchgate.net/publication/26625012_SPICE_Model_of_Memristor_with_Nonlinear_Dopant_Drift</ext-link></citation></ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bishop</surname> <given-names>C. M.</given-names></name></person-group> (<year>1995</year>). <source>Neural Networks for Pattern Recognition</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cai</surname> <given-names>F.</given-names></name> <name><surname>Correll</surname> <given-names>J. M.</given-names></name> <name><surname>Lee</surname> <given-names>S. H.</given-names></name> <name><surname>Lim</surname> <given-names>Y.</given-names></name> <name><surname>Bothra</surname> <given-names>V.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>A fully integrated reprogrammable memristor&#x02013;CMOS system for efficient multiply&#x02013;accumulate operations</article-title>. <source>Nat. Electron</source>. <volume>2</volume>, <fpage>290</fpage>&#x02013;<lpage>299</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-019-0270-x</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cantley</surname> <given-names>K. D.</given-names></name> <name><surname>Subramaniam</surname> <given-names>A.</given-names></name> <name><surname>Stiegler</surname> <given-names>H. J.</given-names></name> <name><surname>Chapman</surname> <given-names>R. A.</given-names></name> <name><surname>Vogel</surname> <given-names>E. M.</given-names></name></person-group> (<year>2012</year>). <article-title>Neural learning circuits utilizing nano-crystalline silicon transistors and memristors</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>23</volume>, <fpage>565</fpage>&#x02013;<lpage>573</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2012.2184801</pub-id><pub-id pub-id-type="pmid">24805040</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chang</surname> <given-names>C. C.</given-names></name> <name><surname>Chen</surname> <given-names>P. C.</given-names></name> <name><surname>Chou</surname> <given-names>T.</given-names></name> <name><surname>Wang</surname> <given-names>I. T.</given-names></name> <name><surname>Hudec</surname> <given-names>B.</given-names></name> <name><surname>Chang</surname> <given-names>C. C.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Mitigating asymmetric nonlinear weight update effects in hardware neural network based on analog resistive synapse. <italic>IEEE. J. Emerg. Select. Top. Circ</italic></article-title>. <source>Syst.</source> <volume>8</volume>, <fpage>116</fpage>&#x02013;<lpage>124</lpage>. <pub-id pub-id-type="doi">10.1109/JETCAS.2017.2771529</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chaturvedi</surname> <given-names>S.</given-names></name> <name><surname>Khurshid</surname> <given-names>A. A.</given-names></name></person-group> (<year>2011</year>). <article-title>Review of spiking neural network architecture for feature extraction and dimensionality reduction</article-title>, in <source>2011 Fourth International Conference on Emerging Trends in Engineering and Technology</source> (<publisher-loc>Port Louis</publisher-loc>), <fpage>317</fpage>&#x02013;<lpage>322</lpage>. <pub-id pub-id-type="doi">10.1109/ICETET.2011.57</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>B.</given-names></name> <name><surname>Yang</surname> <given-names>H.</given-names></name> <name><surname>Zhuge</surname> <given-names>F.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Chang</surname> <given-names>T. C.</given-names></name> <name><surname>He</surname> <given-names>Y. H.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Optimal tuning of memristor conductance variation in spiking neural networks for online unsupervised learning</article-title>. <source>IEEE. Trans. Electron. Dev</source>. <volume>66</volume>, <fpage>2844</fpage>&#x02013;<lpage>2849</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2019.2907541</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>P. Y.</given-names></name> <name><surname>Lin</surname> <given-names>B.</given-names></name> <name><surname>Wang</surname> <given-names>I. T.</given-names></name> <name><surname>Hou</surname> <given-names>T. H.</given-names></name> <name><surname>Ye</surname> <given-names>J.</given-names></name> <name><surname>Vrudhula</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Mitigating effects of non-ideal synaptic device characteristics for on-chip learning</article-title>, in <source>IEEE/ACM International Conference on Computer-Aided Design (ICCAD)</source> (<publisher-loc>Austin, TX</publisher-loc>), <fpage>194</fpage>&#x02013;<lpage>199</lpage>. <pub-id pub-id-type="doi">10.1109/ICCAD.2015.7372570</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>W.-H.</given-names></name> <name><surname>Dou</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>K.-X.</given-names></name> <name><surname>Lin</surname> <given-names>W.-Y.</given-names></name> <name><surname>Li</surname> <given-names>P.-Y.</given-names></name> <name><surname>Huang</surname> <given-names>J.-H.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>CMOS-integrated memristive non-volatile computing-in-memory for AI edge processors</article-title>. <source>Nat. Electron</source>. <volume>2</volume>, <fpage>420</fpage>&#x02013;<lpage>428</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-019-0288-0</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chicca</surname> <given-names>E.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Douglas</surname> <given-names>R.</given-names></name></person-group> (<year>2003</year>). <article-title>An adaptive silicon synapse</article-title>, in <source>Proceedings of the 2003 International Symposium on Circuits and Systems (ISCAS&#x00027;03)</source> (<publisher-loc>Bangkok</publisher-loc>), I&#x02013;I. <pub-id pub-id-type="doi">10.1109/ISCAS.2003.1205505</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Choi</surname> <given-names>S.</given-names></name> <name><surname>Tan</surname> <given-names>S. H.</given-names></name> <name><surname>Li</surname> <given-names>Z.</given-names></name> <name><surname>Kim</surname> <given-names>Y.</given-names></name> <name><surname>Choi</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>P. Y.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations</article-title>. <source>Nat. Mater</source>. <volume>17</volume>, <fpage>335</fpage>&#x02013;<lpage>340</lpage>. <pub-id pub-id-type="doi">10.1038/s41563-017-0001-5</pub-id><pub-id pub-id-type="pmid">29358642</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Choi</surname> <given-names>S.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Lu</surname> <given-names>W.</given-names></name></person-group> (<year>2014</year>). <article-title>Random telegraph noise and resistance switching analysis of oxide based resistive memory</article-title>. <source>Nanoscale</source> <volume>6</volume>, <fpage>400</fpage>&#x02013;<lpage>404</lpage>. <pub-id pub-id-type="doi">10.1039/C3NR05016E</pub-id><pub-id pub-id-type="pmid">24202235</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chu</surname> <given-names>M.</given-names></name> <name><surname>Kim</surname> <given-names>B.</given-names></name> <name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Hwang</surname> <given-names>H.</given-names></name> <name><surname>Jeon</surname> <given-names>M.</given-names></name> <name><surname>Lee</surname> <given-names>B. H.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Neuromorphic hardware system for visual pattern recognition with memristor array and CMOS neuron</article-title>. <source>IEEE. Trans. Ind. Electron</source>. <volume>62</volume>, <fpage>2410</fpage>&#x02013;<lpage>2419</lpage>. <pub-id pub-id-type="doi">10.1109/TIE.2014.2356439</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chua</surname> <given-names>L.</given-names></name></person-group> (<year>1971</year>). <article-title>Memristor-the missing circuit element</article-title>. <source>IEEE Trans. Circ. Theor.</source> <volume>18</volume>, <fpage>507</fpage>&#x02013;<lpage>519</lpage>. <pub-id pub-id-type="doi">10.1109/TCT.1971.1083337</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chua</surname> <given-names>L.</given-names></name></person-group> (<year>2013</year>). <article-title>Memristor, Hodgkin&#x02013;Huxley, and edge of Chaos</article-title>. <source>Nanotechnology</source> <volume>24</volume>:<fpage>383001</fpage>. <pub-id pub-id-type="doi">10.1088/0957-4484/24/38/383001</pub-id><pub-id pub-id-type="pmid">23999613</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duan</surname> <given-names>S.</given-names></name> <name><surname>Hu</surname> <given-names>X.</given-names></name> <name><surname>Dong</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>Mazumder</surname> <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>Memristor-based cellular nonlinear/neural network: design, analysis, and applications</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>26</volume>, <fpage>1202</fpage>&#x02013;<lpage>1213</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2014.2334701</pub-id><pub-id pub-id-type="pmid">25069124</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fackenthal</surname> <given-names>R.</given-names></name> <name><surname>Kitagawa</surname> <given-names>M.</given-names></name> <name><surname>Otsuka</surname> <given-names>W.</given-names></name> <name><surname>Prall</surname> <given-names>K.</given-names></name> <name><surname>Mills</surname> <given-names>D.</given-names></name> <name><surname>Tsutsui</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>19.7 A 16Gb ReRAM with 200MB/s write and 1GB/s read in 27nm technology</article-title>, in <source>IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC)</source> (<publisher-loc>San Francisco, CA</publisher-loc>), <fpage>338</fpage>&#x02013;<lpage>339</lpage>. <pub-id pub-id-type="doi">10.1109/ISSCC.2014.6757460</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fu</surname> <given-names>J.</given-names></name> <name><surname>Liao</surname> <given-names>Z.</given-names></name> <name><surname>Gong</surname> <given-names>N.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Mitigating nonlinear effect of memristive synaptic device for neuromorphic computing</article-title>. <source>IEEE J. Emerg. Select. Top. Circ. Syst.</source> <volume>9</volume>, <fpage>377</fpage>&#x02013;<lpage>387</lpage>. <pub-id pub-id-type="doi">10.1109/JETCAS.2019.2910749</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gi</surname> <given-names>S. G.</given-names></name> <name><surname>Yeo</surname> <given-names>I.</given-names></name> <name><surname>Chu</surname> <given-names>M.</given-names></name> <name><surname>Moon</surname> <given-names>K.</given-names></name> <name><surname>Hwang</surname> <given-names>H.</given-names></name> <name><surname>Lee</surname> <given-names>B. G.</given-names></name></person-group> (<year>2018</year>). <article-title>Modeling and system-level simulation for nonideal conductance response of synaptic devices</article-title>. <source>IEEE Trans. Electron. Dev.</source> <volume>65</volume>, <fpage>3996</fpage>&#x02013;<lpage>4003</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2018.2858762</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hodgkin</surname> <given-names>A. L.</given-names></name> <name><surname>Huxley</surname> <given-names>A. F.</given-names></name></person-group> (<year>1952</year>). <article-title>A quantitative description of membrane current and its application to conduction and excitation in nerve</article-title>. <source>J. Physiol.</source> <volume>117</volume>, <fpage>500</fpage>&#x02013;<lpage>544</lpage>. <pub-id pub-id-type="doi">10.1113/jphysiol.1952.sp004764</pub-id><pub-id pub-id-type="pmid">2185861</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>Q.</given-names></name> <name><surname>Zhao</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <article-title>Novel circuit designs of memristor synapse and neuron</article-title>. <source>Neurocomputing</source> <volume>330</volume>, <fpage>11</fpage>&#x02013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2018.11.043</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>M.</given-names></name> <name><surname>Graves</surname> <given-names>C. E.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Ge</surname> <given-names>N.</given-names></name> <name><surname>Montgomery</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Memristor-based analog computation and neural network classification with a dot product engine</article-title>. <source>Adv. Mater</source>. <volume>30</volume>:<fpage>1705914</fpage>. <pub-id pub-id-type="doi">10.1002/adma.201705914</pub-id><pub-id pub-id-type="pmid">29318659</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>M.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Wu</surname> <given-names>Q.</given-names></name> <name><surname>Rose</surname> <given-names>G. S.</given-names></name> <name><surname>Linderman</surname> <given-names>R. W.</given-names></name></person-group> (<year>2014</year>). <article-title>Memristor crossbar-based neuromorphic computing system: a case study</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>25</volume>, <fpage>1864</fpage>&#x02013;<lpage>1878</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2013.2296777</pub-id><pub-id pub-id-type="pmid">25291739</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>A.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>R.</given-names></name> <name><surname>Chi</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <source>Memristor Neural Network Design</source>. <publisher-loc>London</publisher-loc>: <publisher-name>IntechOpen</publisher-name>.</citation></ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ishii</surname> <given-names>M.</given-names></name> <name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>Lewis</surname> <given-names>S.</given-names></name> <name><surname>Okazaki</surname> <given-names>A.</given-names></name> <name><surname>Okazawa</surname> <given-names>J.</given-names></name> <name><surname>Ito</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>On-Chip Trainable 1.4M 6T2R PCM Synaptic Array with 1.6K Stochastic LIF Neurons for Spiking RBM</article-title>, in <source>IEEE International Electron Devices Meeting (IEDM)</source> (<publisher-loc>San Francisco, CA</publisher-loc>), <fpage>7</fpage>&#x02013;<lpage>11</lpage>.</citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jeong</surname> <given-names>H.</given-names></name> <name><surname>Shi</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>Memristor devices for neural networks</article-title>. <source>J. Phys. D Appl. Phys.</source> <volume>52</volume>:<fpage>023003</fpage>. <pub-id pub-id-type="doi">10.1088/1361-6463/aae223</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jeong</surname> <given-names>Y.</given-names></name> <name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>Lu</surname> <given-names>W. D.</given-names></name></person-group> (<year>2015</year>). <article-title>Utilizing multiple state variables to improve the dynamic range of analog switching in a memristor</article-title>. <source>Appl. Phys. Lett</source>. <volume>107</volume>:<fpage>173105</fpage>. <pub-id pub-id-type="doi">10.1063/1.4934818</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>H.</given-names></name> <name><surname>Yamada</surname> <given-names>K.</given-names></name> <name><surname>Ren</surname> <given-names>Z.</given-names></name> <name><surname>Kwok</surname> <given-names>T.</given-names></name> <name><surname>Luo</surname> <given-names>F.</given-names></name> <name><surname>Yang</surname> <given-names>Q.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Pulse-width modulation based dot-product engine for neuromorphic computing system using memristor crossbar array</article-title>, in <source>IEEE International Symposium on Circuits and Systems (ISCAS)</source> (<publisher-loc>Florence</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1109/ISCAS.2018.8351276</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jo</surname> <given-names>S. H.</given-names></name> <name><surname>Chang</surname> <given-names>T.</given-names></name> <name><surname>Ebong</surname> <given-names>I.</given-names></name> <name><surname>Bhadviya</surname> <given-names>B. B.</given-names></name> <name><surname>Mazumder</surname> <given-names>P.</given-names></name> <name><surname>Lu</surname> <given-names>W.</given-names></name></person-group> (<year>2010</year>). <article-title>Nanoscale memristor device as synapse in neuromorphic systems</article-title>. <source>Nano Lett.</source> <volume>10</volume>, <fpage>1297</fpage>&#x02013;<lpage>1301</lpage>. <pub-id pub-id-type="doi">10.1021/nl904092h</pub-id><pub-id pub-id-type="pmid">20192230</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Joglekar</surname> <given-names>Y. N.</given-names></name> <name><surname>Wolf</surname> <given-names>S. J.</given-names></name></person-group> (<year>2009</year>). <article-title>The elusive memristor: properties of basic electrical circuits</article-title>. <source>Eur. J. Phys</source>. <volume>30</volume>:<fpage>661</fpage>. <pub-id pub-id-type="doi">10.1088/0143-0807/30/4/001</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kataeva</surname> <given-names>I.</given-names></name> <name><surname>Ohtsuka</surname> <given-names>S.</given-names></name> <name><surname>Nili</surname> <given-names>H.</given-names></name> <name><surname>Kim</surname> <given-names>H.</given-names></name> <name><surname>Isobe</surname> <given-names>Y.</given-names></name> <name><surname>Yako</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Towards the development of analog neuromorphic chip prototype with 2.4 M integrated memristors</article-title>, in <source>IEEE International Symposium on Circuits and Systems (ISCAS)</source> (<publisher-loc>Sapporo</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1109/ISCAS.2019.8702125</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>H.</given-names></name> <name><surname>Sah</surname> <given-names>M. P.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Roska</surname> <given-names>T.</given-names></name> <name><surname>Chua</surname> <given-names>L. O.</given-names></name></person-group> (<year>2011a</year>). <article-title>Memristor bridge synapses</article-title>. <source>Proc. IEEE</source> <volume>100</volume>, <fpage>2061</fpage>&#x02013;<lpage>2070</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2011.2166749</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>H.</given-names></name> <name><surname>Sah</surname> <given-names>M. P.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Roska</surname> <given-names>T.</given-names></name> <name><surname>Chua</surname> <given-names>L. O.</given-names></name></person-group> (<year>2011b</year>). <article-title>Neural synaptic weighting with a pulse-based memristor circuit</article-title>. <source>IEEE Trans. Circ. Syst. I Reg. Pap.</source> <volume>59</volume>, <fpage>148</fpage>&#x02013;<lpage>158</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2011.2161360</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>Du</surname> <given-names>C.</given-names></name> <name><surname>Sheridan</surname> <given-names>P.</given-names></name> <name><surname>Ma</surname> <given-names>W.</given-names></name> <name><surname>Choi</surname> <given-names>S.</given-names></name> <name><surname>Lu</surname> <given-names>W. D.</given-names></name></person-group> (<year>2015</year>). <article-title>Experimental demonstration of a second-order memristor and its ability to biorealistically implement synaptic plasticity</article-title>. <source>Nano Lett.</source> <volume>15</volume>, <fpage>2203</fpage>&#x02013;<lpage>2211</lpage>. <pub-id pub-id-type="doi">10.1021/acs.nanolett.5b00697</pub-id><pub-id pub-id-type="pmid">25710872</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kozhevnikov</surname> <given-names>D. D.</given-names></name> <name><surname>Krasilich</surname> <given-names>N. V.</given-names></name></person-group> (<year>2016</year>). <article-title>Memristor-based hardware neural networks modelling review and framework concept</article-title>. <source>Proc. Inst. Syst. Prog. RAS</source> <volume>28</volume>, <fpage>243</fpage>&#x02013;<lpage>258</lpage>. <pub-id pub-id-type="doi">10.15514/ISPRAS-2016-28(2)-16</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krestinskaya</surname> <given-names>O.</given-names></name> <name><surname>James</surname> <given-names>A. P.</given-names></name> <name><surname>Chua</surname> <given-names>L. O.</given-names></name></person-group> (<year>2019</year>). <article-title>Neuromemristive circuits for edge computing: a review</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst</source>. <volume>31</volume>, <fpage>4</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2019.2899262</pub-id><pub-id pub-id-type="pmid">30892238</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krestinskaya</surname> <given-names>O.</given-names></name> <name><surname>Salama</surname> <given-names>K. N.</given-names></name> <name><surname>James</surname> <given-names>A. P.</given-names></name></person-group> (<year>2018</year>). <article-title>Learning in memristive neural network architectures using analog backpropagation circuits</article-title>. <source>IEEE Trans. Circ. Syst. I Reg. Pap.</source> <volume>66</volume>, <fpage>719</fpage>&#x02013;<lpage>732</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2018.2866510</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kvatinsky</surname> <given-names>S.</given-names></name> <name><surname>Friedman</surname> <given-names>E. G.</given-names></name> <name><surname>Kolodny</surname> <given-names>A.</given-names></name> <name><surname>Weiser</surname> <given-names>U. C.</given-names></name></person-group> (<year>2012</year>). <article-title>TEAM: threshold adaptive memristor model</article-title>. <source>IEEE Trans. Circ. Syst. I Reg. Pap.</source> <volume>60</volume>, <fpage>211</fpage>&#x02013;<lpage>221</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2012.2215714</pub-id></citation></ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lastras-Monta&#x000F1;o</surname> <given-names>M. A.</given-names></name> <name><surname>Cheng</surname> <given-names>K. T.</given-names></name></person-group> (<year>2018</year>). <article-title>Resistive random-access memory based on ratioed memristors</article-title>. <source>Nat. Electron</source>. <volume>1</volume>, <fpage>466</fpage>&#x02013;<lpage>472</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-018-0115-z</pub-id></citation></ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>D.</given-names></name> <name><surname>Park</surname> <given-names>J.</given-names></name> <name><surname>Moon</surname> <given-names>K.</given-names></name> <name><surname>Jang</surname> <given-names>J.</given-names></name> <name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Chu</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Oxide based nanoscale analog synapse device for neural signal recognition system</article-title>, in <source>IEEE International Electron Devices Meeting (IEDM)</source> (<publisher-loc>Washington, DC</publisher-loc>), <fpage>4</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1109/IEDM.2015.7409628</pub-id></citation></ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>K. J.</given-names></name> <name><surname>Cho</surname> <given-names>B. H.</given-names></name> <name><surname>Cho</surname> <given-names>W. Y.</given-names></name> <name><surname>Kang</surname> <given-names>S.</given-names></name> <name><surname>Choi</surname> <given-names>B. G.</given-names></name> <name><surname>Oh</surname> <given-names>H. R.</given-names></name> <etal/></person-group>. (<year>2008</year>). <article-title>A 90 nm 1.8 V 512 Mb diode-switch PRAM with 266 MB/s read throughput</article-title>. <source>IEEE J. Solid State Circ.</source> <volume>43</volume>, <fpage>150</fpage>&#x02013;<lpage>162</lpage>. <pub-id pub-id-type="doi">10.1109/JSSC.2007.908001</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Belkin</surname> <given-names>D.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Yan</surname> <given-names>P.</given-names></name> <name><surname>Hu</surname> <given-names>M.</given-names></name> <name><surname>Ge</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Efficient and self-adaptive <italic>in-situ</italic> learning in multilayer memristor neural networks</article-title>. <source>Nat. Commun.</source> <volume>9</volume>:<fpage>2385</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-018-04484-2</pub-id><pub-id pub-id-type="pmid">29921923</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Rao</surname> <given-names>M.</given-names></name> <name><surname>Belkin</surname> <given-names>D.</given-names></name> <name><surname>Song</surname> <given-names>W.</given-names></name> <name><surname>Jiang</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Long short-term memory networks in memristor crossbar arrays</article-title>. <source>Nat. Mach. Intell.</source> <volume>1</volume>, <fpage>49</fpage>&#x02013;<lpage>57</lpage>. <pub-id pub-id-type="doi">10.1038/s42256-018-0001-4</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>S.</given-names></name> <name><surname>Wen</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>T.</given-names></name> <name><surname>Xiong</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Fang</surname> <given-names>G.</given-names></name></person-group> (<year>2016</year>). <article-title><italic>In situ</italic> synthesis of 3D CoS nanoflake/Ni(OH)<sub>2</sub> nanosheet nanocomposite structure as a candidate supercapacitor electrode</article-title>. <source>Nanotechnology</source> <volume>27</volume>:<fpage>145401</fpage>. <pub-id pub-id-type="doi">10.1088/0957-4484/27/14/145401</pub-id><pub-id pub-id-type="pmid">26905933</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>S. J.</given-names></name> <name><surname>Dong</surname> <given-names>B. Y.</given-names></name> <name><surname>Wang</surname> <given-names>B.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Sun</surname> <given-names>H. J.</given-names></name> <name><surname>He</surname> <given-names>Y. H.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Alleviating conductance nonlinearity via pulse shape designs in TaO<sub>x</sub> memristive synapses</article-title>. <source>IEEE Trans. Electron. Dev.</source> <volume>66</volume>, <fpage>810</fpage>&#x02013;<lpage>813</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2018.2876065</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>P.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Jiang</surname> <given-names>H.</given-names></name> <name><surname>Song</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Three-dimensional memristor circuits as complex neural networks</article-title>. <source>Nat. Electron.</source> <volume>3</volume>, <fpage>225</fpage>&#x02013;<lpage>232</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-020-0397-9</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>Z.</given-names></name> <name><surname>Tang</surname> <given-names>Y.</given-names></name> <name><surname>Hu</surname> <given-names>W.</given-names></name> <name><surname>Wu</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>3D Convolutional Neural Network based on memristor for video recognition</article-title>. <source>Pattern. Recogn. Lett</source>. <volume>130</volume>, <fpage>116</fpage>&#x02013;<lpage>124</lpage>. <pub-id pub-id-type="doi">10.1016/j.patrec.2018.12.005</pub-id></citation></ref>
<ref id="B55">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>T. Y.</given-names></name> <name><surname>Yan</surname> <given-names>T. H.</given-names></name> <name><surname>Scheuerlein</surname> <given-names>R.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Lee</surname> <given-names>J. K.</given-names></name> <name><surname>Balakrishnan</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>A 130.7mm<sup>2</sup> 2-layer 32Gb ReRAM memory device in 24nm technology</article-title>, in <source>IEEE International Solid-State Circuits Conference Digest of Technical Papers</source> (<publisher-loc>San Francisco, CA</publisher-loc>), <fpage>210</fpage>&#x02013;<lpage>211</lpage>. <pub-id pub-id-type="doi">10.1109/JSSC.2013.2280296</pub-id></citation></ref>
<ref id="B56">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>Q.</given-names></name> <name><surname>Xu</surname> <given-names>X.</given-names></name> <name><surname>Gong</surname> <given-names>T.</given-names></name> <name><surname>Lv</surname> <given-names>H.</given-names></name> <name><surname>Dong</surname> <given-names>D.</given-names></name> <name><surname>Ma</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>8-Layers 3D vertical RRAM with excellent scalability towards storage class memory applications</article-title>, in <source>IEEE International Electron Devices Meeting (IEDM)</source> (<publisher-loc>San Francisco, CA</publisher-loc>), <fpage>2.7.1</fpage>&#x02013;<lpage>2.7.4</lpage>.</citation></ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>D.</given-names></name> <name><surname>Wang</surname> <given-names>G.</given-names></name> <name><surname>Han</surname> <given-names>C.</given-names></name> <name><surname>Shen</surname> <given-names>Y.</given-names></name> <name><surname>Liang</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>A memristive neural network model with associative memory for modeling affections</article-title>. <source>IEEE Access.</source> <volume>6</volume>, <fpage>61614</fpage>&#x02013;<lpage>61622</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2875433</pub-id></citation></ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Midya</surname> <given-names>R.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Asapu</surname> <given-names>S.</given-names></name> <name><surname>Joshi</surname> <given-names>S.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Zhuo</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2019a</year>). <article-title>Artificial neural network (ANN) to spiking neural network (SNN) converters based on diffusive memristors</article-title>. <source>Adv. Electron. Mater</source>. <volume>5</volume>:<fpage>1900060</fpage>. <pub-id pub-id-type="doi">10.1002/aelm.201900060</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Midya</surname> <given-names>R.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Asapu</surname> <given-names>S.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Rao</surname> <given-names>M.</given-names></name> <name><surname>Song</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2019b</year>). <article-title>Reservoir computing using diffusive memristors</article-title>. <source>Adv. Intell. Syst</source>. <volume>1</volume>:<fpage>1900084</fpage>. <pub-id pub-id-type="doi">10.1002/aisy.201900084</pub-id></citation></ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Midya</surname> <given-names>R.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Savel&#x00027;ev</surname> <given-names>S. E.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Rao</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Anatomy of Ag/Hafnia-based selectors with 10<sup>10</sup> nonlinearity</article-title>. <source>Adv. Mater.</source> <volume>29</volume>:<fpage>1604457</fpage>. <pub-id pub-id-type="doi">10.1002/adma.201604457</pub-id><pub-id pub-id-type="pmid">28134458</pub-id></citation></ref>
<ref id="B61">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nenadic</surname> <given-names>Z.</given-names></name> <name><surname>Ghosh</surname> <given-names>B. K.</given-names></name></person-group> (<year>2001</year>). <article-title>Computation with biological neurons</article-title>, in <source>Proceedings of the 2001 American Control Conference (Cat. No. 01CH37148)</source> (<publisher-loc>Arlington, VA</publisher-loc>), <fpage>257</fpage>&#x02013;<lpage>262</lpage>. <pub-id pub-id-type="doi">10.1109/ACC.2001.945552</pub-id></citation></ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ohno</surname> <given-names>T.</given-names></name> <name><surname>Hasegawa</surname> <given-names>T.</given-names></name> <name><surname>Tsuruoka</surname> <given-names>T.</given-names></name> <name><surname>Terabe</surname> <given-names>K.</given-names></name> <name><surname>Gimzewski</surname> <given-names>J. K.</given-names></name> <name><surname>Aono</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Short-term plasticity and long-term potentiation mimicked in single inorganic synapses</article-title>. <source>Nat. Mater.</source> <volume>10</volume>, <fpage>591</fpage>&#x02013;<lpage>595</lpage>. <pub-id pub-id-type="doi">10.1038/nmat3054</pub-id><pub-id pub-id-type="pmid">21706012</pub-id></citation></ref>
<ref id="B63">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ojiugwo</surname> <given-names>C. N.</given-names></name> <name><surname>Abdallah</surname> <given-names>A. B.</given-names></name> <name><surname>Thron</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Simulation of biological learning with spiking neural networks</article-title>, in <source>Implementations and Applications of Machine Learning</source>, Vol. 782, eds S. A. Subair and C. Thron (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>207</fpage>&#x02013;<lpage>227</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-37830-1_9</pub-id></citation></ref>
<ref id="B64">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Sheri</surname> <given-names>A.</given-names></name> <name><surname>Kim</surname> <given-names>J.</given-names></name> <name><surname>Noh</surname> <given-names>J.</given-names></name> <name><surname>Jang</surname> <given-names>J.</given-names></name> <name><surname>Jeon</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Neuromorphic speech systems using advanced ReRAM-based synapse</article-title>, in <source>IEEE International Electron Devices Meeting</source> (<publisher-loc>Washington, DC</publisher-loc>), <fpage>25.6.1</fpage>&#x02013;<lpage>25.6.4</lpage>. <pub-id pub-id-type="doi">10.1109/IEDM.2013.6724692</pub-id></citation></ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfeiffer</surname> <given-names>M.</given-names></name> <name><surname>Pfeil</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Deep learning with spiking neurons: opportunities and challenges</article-title>. <source>Front. Neurosci</source>. <volume>12</volume>:<fpage>774</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00774</pub-id><pub-id pub-id-type="pmid">30410432</pub-id></citation></ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pickett</surname> <given-names>M. D.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B.</given-names></name> <name><surname>Borghetti</surname> <given-names>J. L.</given-names></name> <name><surname>Yang</surname> <given-names>J. J.</given-names></name> <name><surname>Snider</surname> <given-names>G. S.</given-names></name> <name><surname>Stewart</surname> <given-names>D. R.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Switching dynamics in titanium dioxide memristive devices</article-title>. <source>J. Appl. Phys</source>. <volume>106</volume>:<fpage>074508</fpage>. <pub-id pub-id-type="doi">10.1063/1.3236506</pub-id></citation></ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prezioso</surname> <given-names>M.</given-names></name> <name><surname>Merrikh-Bayat</surname> <given-names>F.</given-names></name> <name><surname>Hoskins</surname> <given-names>B. D.</given-names></name> <name><surname>Adam</surname> <given-names>G. C.</given-names></name> <name><surname>Likharev</surname> <given-names>K. K.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B.</given-names></name></person-group> (<year>2015</year>). <article-title>Training and operation of an integrated neuromorphic network based on metal-oxide memristors</article-title>. <source>Nature</source> <volume>521</volume>, <fpage>61</fpage>&#x02013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1038/nature14441</pub-id><pub-id pub-id-type="pmid">25951284</pub-id></citation></ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prodromakis</surname> <given-names>T.</given-names></name> <name><surname>Peh</surname> <given-names>B. P.</given-names></name> <name><surname>Papavassiliou</surname> <given-names>C.</given-names></name> <name><surname>Toumazou</surname> <given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>A versatile memristor model with nonlinear dopant kinetics</article-title>. <source>IEEE. Trans. Electron Dev</source>. <volume>58</volume>, <fpage>3099</fpage>&#x02013;<lpage>3105</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2011.2158004</pub-id></citation></ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarwar</surname> <given-names>S. S.</given-names></name> <name><surname>Saqueb</surname> <given-names>S. A. N.</given-names></name> <name><surname>Quaiyum</surname> <given-names>F.</given-names></name> <name><surname>Rashid</surname> <given-names>A. H. U.</given-names></name></person-group> (<year>2013</year>). <article-title>Memristor-based nonvolatile random access memory: hybrid architecture for low power compact memory design</article-title>. <source>IEEE Access</source> <volume>1</volume>, <fpage>29</fpage>&#x02013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2013.2259891</pub-id></citation></ref>
<ref id="B70">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Seo</surname> <given-names>J. S.</given-names></name> <name><surname>Brezzo</surname> <given-names>B.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Parker</surname> <given-names>B. D.</given-names></name> <name><surname>Esser</surname> <given-names>S. K.</given-names></name> <name><surname>Montoye</surname> <given-names>R. K.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons</article-title>, in <source>IEEE Custom Integrated Circuits Conference (CICC)</source> (<publisher-loc>San Jose, CA</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1109/CICC.2011.6055293</pub-id></citation></ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Seok</surname> <given-names>J. Y.</given-names></name> <name><surname>Song</surname> <given-names>S. J.</given-names></name> <name><surname>Yoon</surname> <given-names>J. H.</given-names></name> <name><surname>Yoon</surname> <given-names>K. J.</given-names></name> <name><surname>Park</surname> <given-names>T. H.</given-names></name> <name><surname>Kwon</surname> <given-names>D. E.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>A review of three-dimensional resistive switching cross-bar array memories from the integration and materials property points of view</article-title>. <source>Adv. Funct. Mater</source>. <volume>24</volume>, <fpage>5316</fpage>&#x02013;<lpage>5339</lpage>. <pub-id pub-id-type="doi">10.1002/adfm.201303520</pub-id></citation></ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shukla</surname> <given-names>A.</given-names></name> <name><surname>Ganguly</surname> <given-names>U.</given-names></name></person-group> (<year>2018</year>). <article-title>An on-chip trainable and the clock-less spiking neural network with 1R memristive synapses</article-title>. <source>IEEE. Trans. Biomed. Circ. Syst</source>. <volume>12</volume>, <fpage>884</fpage>&#x02013;<lpage>893</lpage>. <pub-id pub-id-type="doi">10.1109/TBCAS.2018.2831618</pub-id><pub-id pub-id-type="pmid">29993721</pub-id></citation></ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Simmons</surname> <given-names>J. G.</given-names></name></person-group> (<year>1963</year>). <article-title>Generalized formula for the electric tunnel effect between similar electrodes separated by a thin insulating film</article-title>. <source>J. Appl. Phys</source>. <volume>34</volume>, <fpage>1793</fpage>&#x02013;<lpage>1803</lpage>. <pub-id pub-id-type="doi">10.1063/1.1702682</pub-id></citation></ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smagulova</surname> <given-names>K.</given-names></name> <name><surname>Krestinskaya</surname> <given-names>O.</given-names></name> <name><surname>James</surname> <given-names>A. P.</given-names></name></person-group> (<year>2018</year>). <article-title>A memristor-based long short term memory circuit</article-title>. <source>Analog. Integr. Circ. Signal Process</source>. <volume>95</volume>, <fpage>467</fpage>&#x02013;<lpage>472</lpage>. <pub-id pub-id-type="doi">10.1007/s10470-018-1180-y</pub-id></citation></ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Strukov</surname> <given-names>D. B.</given-names></name> <name><surname>Snider</surname> <given-names>G. S.</given-names></name> <name><surname>Stewart</surname> <given-names>D. R.</given-names></name> <name><surname>Williams</surname> <given-names>R. S.</given-names></name></person-group> (<year>2008</year>). <article-title>The missing memristor found</article-title>. <source>Nature</source> <volume>453</volume>, <fpage>80</fpage>&#x02013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1038/nature06932</pub-id></citation></ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Strukov</surname> <given-names>D. B.</given-names></name> <name><surname>Williams</surname> <given-names>R. S.</given-names></name></person-group> (<year>2009</year>). <article-title>Exponential ionic drift: fast switching and low volatility of thin-film memristors</article-title>. <source>Appl. Phys. A</source> <volume>94</volume>, <fpage>515</fpage>&#x02013;<lpage>519</lpage>. <pub-id pub-id-type="doi">10.1007/s00339-008-4975-3</pub-id></citation></ref>
<ref id="B77">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <source>CMOS and Memristor Technologies for Neuromorphic Computing Applications</source>. Technical Report No. UCB/EECS-2015&#x02013;S-2218, Electrical Engineering and Computer Sciences, University of California at Berkeley.</citation></ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>K.</given-names></name> <name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Yan</surname> <given-names>X.</given-names></name></person-group> (<year>2020</year>). <article-title>The future of memristors: materials engineering and neural networks</article-title>. <source>Adv. Funct. Mater.</source> <volume>31</volume>:<fpage>2006773</fpage>. <pub-id pub-id-type="doi">10.1002/adfm.202006773</pub-id></citation></ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tanikawa</surname> <given-names>T.</given-names></name> <name><surname>Ohnishi</surname> <given-names>K.</given-names></name> <name><surname>Kanoh</surname> <given-names>M.</given-names></name> <name><surname>Mukai</surname> <given-names>T.</given-names></name> <name><surname>Matsuoka</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Three-dimensional imaging of threading dislocations in GaN crystals using two-photon excitation photoluminescence</article-title>. <source>Appl. Phys. Express</source>. <volume>11</volume>:<fpage>031004</fpage>. <pub-id pub-id-type="doi">10.7567/APEX.11.031004</pub-id></citation></ref>
<ref id="B80">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tsai</surname> <given-names>H.</given-names></name> <name><surname>Ambrogio</surname> <given-names>S.</given-names></name> <name><surname>Mackin</surname> <given-names>C.</given-names></name> <name><surname>Narayanan</surname> <given-names>P.</given-names></name> <name><surname>Shelby</surname> <given-names>R. M.</given-names></name> <name><surname>Rocki</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Inference of long-short term memory networks at software-equivalent accuracy using 2.5M analog phase change memory devices</article-title>, in <source>Symposium on VLSI Technology</source> (<publisher-loc>Kyoto</publisher-loc>), <fpage>82</fpage>&#x02013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.23919/VLSIT.2019.8776519</pub-id></citation></ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Upadhyay</surname> <given-names>N. K.</given-names></name> <name><surname>Sun</surname> <given-names>W.</given-names></name> <name><surname>Lin</surname> <given-names>P.</given-names></name> <name><surname>Joshi</surname> <given-names>S.</given-names></name> <name><surname>Midya</surname> <given-names>R.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>A memristor with low switching current and voltage for 1s1r integration and array operation</article-title>. <source>Adv. Electron. Mater.</source> <volume>6</volume>:<fpage>1901411</fpage>. <pub-id pub-id-type="doi">10.1002/aelm.201901411</pub-id></citation></ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Volos</surname> <given-names>C. K.</given-names></name> <name><surname>Kyprianidis</surname> <given-names>I. M.</given-names></name> <name><surname>Stouboulos</surname> <given-names>I. N.</given-names></name> <name><surname>Tlelo-Cuautle</surname> <given-names>E.</given-names></name> <name><surname>Vaidyanathan</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Memristor: a new concept in synchronization of coupled neuromorphic circuits</article-title>. <source>J. Eng. Sci. Technol. Rev.</source> <volume>8</volume>, <fpage>157</fpage>&#x02013;<lpage>173</lpage>. <pub-id pub-id-type="doi">10.25103/jestr.082.21</pub-id></citation></ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Yan</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <article-title>Overview of resistive random access memory (RRAM): materials, filament mechanisms, performance optimization, and prospects</article-title>. <source>Phys. Status. Solidi R</source>. <volume>13</volume>:<fpage>1900073</fpage>. <pub-id pub-id-type="doi">10.1002/pssr.201900073</pub-id></citation></ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>T.</given-names></name> <name><surname>Roychowdhury</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Well-posed models of memristive devices</article-title>. <source>arXiv</source> preprint arXiv:1605.04897.</citation></ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Joshi</surname> <given-names>S.</given-names></name> <name><surname>Savel&#x00027;ev</surname> <given-names>S.</given-names></name> <name><surname>Song</surname> <given-names>W.</given-names></name> <name><surname>Midya</surname> <given-names>R.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2018a</year>). <article-title>Fully memristive neural networks for pattern classification with unsupervised learning</article-title>. <source>Nat. Electron.</source> <volume>1</volume>, <fpage>137</fpage>&#x02013;<lpage>145</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-018-0023-2</pub-id></citation></ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Joshi</surname> <given-names>S.</given-names></name> <name><surname>Saveliev</surname> <given-names>S. E.</given-names></name> <name><surname>Jiang</surname> <given-names>H.</given-names></name> <name><surname>Midya</surname> <given-names>R.</given-names></name> <name><surname>Lin</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing</article-title>. <source>Nat. Mater.</source> <volume>16</volume>, <fpage>101</fpage>&#x02013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1038/nmat4756</pub-id><pub-id pub-id-type="pmid">27669052</pub-id></citation></ref>
<ref id="B87">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Lin</surname> <given-names>P.</given-names></name> <name><surname>Rao</surname> <given-names>M.</given-names></name> <name><surname>Nie</surname> <given-names>Y.</given-names></name> <name><surname>Song</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2019a</year>). <article-title><italic>In situ</italic> training of feed-forward and recurrent convolutional memristor networks</article-title>. <source>Nat. Mach. Intell.</source> <volume>1</volume>, <fpage>434</fpage>&#x02013;<lpage>442</lpage>. <pub-id pub-id-type="doi">10.1038/s42256-019-0089-1</pub-id></citation></ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Song</surname> <given-names>W.</given-names></name> <name><surname>Rao</surname> <given-names>M.</given-names></name> <name><surname>Belkin</surname> <given-names>D.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2019b</year>). <article-title>Reinforcement learning with analogue memristor arrays</article-title>. <source>Nat. Electron.</source> <volume>2</volume>, <fpage>115</fpage>&#x02013;<lpage>124</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-019-0221-6</pub-id></citation></ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Rao</surname> <given-names>M.</given-names></name> <name><surname>Han</surname> <given-names>J. W.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Lin</surname> <given-names>P.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2018b</year>). <article-title>Capacitive neural network with neuro-transistors</article-title>. <source>Nat. Commun</source>. <volume>9</volume>:<fpage>3208</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-018-05677-5</pub-id></citation></ref>
<ref id="B90">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Wu</surname> <given-names>H.</given-names></name> <name><surname>Burr</surname> <given-names>G. W.</given-names></name> <name><surname>Hwang</surname> <given-names>C. S.</given-names></name> <name><surname>Wang</surname> <given-names>K. L.</given-names></name> <name><surname>Xia</surname> <given-names>Q.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Resistive switching materials for information processing</article-title>. <source>Nat. Rev. Mater.</source> <volume>5</volume>, <fpage>173</fpage>&#x02013;<lpage>195</lpage>. <pub-id pub-id-type="doi">10.1038/s41578-019-0159-3</pub-id></citation></ref>
<ref id="B91">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>W.</given-names></name> <name><surname>Kang</surname> <given-names>W.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Klein</surname> <given-names>J. O.</given-names></name> <name><surname>Chappert</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>Ferroelectric tunnel memristor-based neuromorphic network with 1T1R crossbar architecture</article-title>, in <source>International Joint Conference on Neural Networks (IJCNN)</source> (<publisher-loc>Beijing</publisher-loc>), <fpage>29</fpage>&#x02013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1109/IJCNN.2014.6889951</pub-id></citation></ref>
<ref id="B92">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Waser</surname> <given-names>R.</given-names></name> <name><surname>Dittmann</surname> <given-names>R.</given-names></name> <name><surname>Staikov</surname> <given-names>G.</given-names></name> <name><surname>Szot</surname> <given-names>K.</given-names></name></person-group> (<year>2009</year>). <article-title>Redox-based resistive switching memories&#x02013;nanoionic mechanisms, prospects, and challenges</article-title>. <source>Adv. Mater.</source> <volume>21</volume>, <fpage>2632</fpage>&#x02013;<lpage>2663</lpage>. <pub-id pub-id-type="doi">10.1002/adma.200900375</pub-id></citation></ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wong</surname> <given-names>H. S. P.</given-names></name> <name><surname>Lee</surname> <given-names>H. Y.</given-names></name> <name><surname>Yu</surname> <given-names>S.</given-names></name> <name><surname>Chen</surname> <given-names>Y. S.</given-names></name> <name><surname>Wu</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>P. S.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Metal&#x02013;oxide RRAM</article-title>. <source>Proc. IEEE</source> <volume>100</volume>, <fpage>1951</fpage>&#x02013;<lpage>1970</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2012.2190369</pub-id></citation></ref>
<ref id="B94">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Woo</surname> <given-names>J.</given-names></name> <name><surname>Yu</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Resistive memory-based analog synapse: the pursuit for linear and symmetric weight update</article-title>. <source>IEEE Nanotechnol. Mag.</source> <volume>12</volume>, <fpage>36</fpage>&#x02013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1109/MNANO.2018.2844902</pub-id></citation></ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>X.</given-names></name> <name><surname>Saxena</surname> <given-names>V.</given-names></name></person-group> (<year>2018</year>). <article-title>Dendritic-inspired processing enables bio-plausible STDP in compound binary synapses</article-title>. <source>IEEE. Trans. Nanotechnol</source>. <volume>18</volume>, <fpage>149</fpage>&#x02013;<lpage>159</lpage>. <pub-id pub-id-type="doi">10.1109/TNANO.2018.2871680</pub-id></citation></ref>
<ref id="B96">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xia</surname> <given-names>Q.</given-names></name> <name><surname>Yang</surname> <given-names>J. J.</given-names></name></person-group> (<year>2019</year>). <article-title>Memristive crossbar arrays for brain-inspired computing</article-title>. <source>Nat. Mater</source>. <volume>18</volume>, <fpage>309</fpage>&#x02013;<lpage>323</lpage>. <pub-id pub-id-type="doi">10.1038/s41563-019-0291-x</pub-id><pub-id pub-id-type="pmid">30940894</pub-id></citation></ref>
<ref id="B97">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xue</surname> <given-names>C.-X.</given-names></name> <name><surname>Chiu</surname> <given-names>Y.-C.</given-names></name> <name><surname>Liu</surname> <given-names>T.-W.</given-names></name> <name><surname>Huang</surname> <given-names>T.-Y.</given-names></name> <name><surname>Liu</surname> <given-names>J.-S.</given-names></name> <name><surname>Chang</surname> <given-names>T.-W.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>A CMOS-integrated compute-in-memory macro based on resistive random-access memory for AI edge devices</article-title>. <source>Nat. Electron</source>. <volume>4</volume>, <fpage>81</fpage>&#x02013;<lpage>90</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-020-00505-5</pub-id></citation></ref>
<ref id="B98">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2019a</year>). <article-title>Flexible transparent organic artificial synapse based on the tungsten/egg albumen/indium tin oxide/polyethylene terephthalate memristor</article-title>. <source>ACS. Appl. Mater. Inter</source>. <volume>11</volume>, <fpage>18654</fpage>&#x02013;<lpage>18661</lpage>. <pub-id pub-id-type="doi">10.1021/acsami.9b04443</pub-id><pub-id pub-id-type="pmid">31038906</pub-id></citation></ref>
<ref id="B99">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Pei</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>H.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2019b</year>). <article-title>Self-assembled networked PbS distribution quantum dots for resistive switching and artificial synapse performance boost of memristors</article-title>. <source>Adv. Mater</source>. <volume>31</volume>:<fpage>1805284</fpage>. <pub-id pub-id-type="doi">10.1002/adma.201805284</pub-id><pub-id pub-id-type="pmid">30589113</pub-id></citation></ref>
<ref id="B100">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>K.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2019c</year>). <article-title>A new memristor with 2D Ti<sub>3</sub>C<sub>2</sub>T<sub>x</sub> MXene flakes as an artificial bio-synapse</article-title>. <source>Small</source> <volume>15</volume>:<fpage>1900107</fpage>. <pub-id pub-id-type="doi">10.1002/smll.201900107</pub-id><pub-id pub-id-type="pmid">31066210</pub-id></citation></ref>
<ref id="B101">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>L.</given-names></name> <name><surname>Chen</surname> <given-names>H.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name> <etal/></person-group>. (<year>2018a</year>). <article-title>Graphene oxide quantum dots based memristors with progressive conduction tuning for artificial synaptic learning</article-title>. <source>Adv. Funct. Mater.</source> <volume>28</volume>:<fpage>1803728</fpage>. <pub-id pub-id-type="doi">10.1002/adfm.201803728</pub-id></citation></ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>L.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Highly improved performance in Zr<sub>0.5</sub>Hf<sub>0.5</sub>O<sub>2</sub> films inserted with graphene oxide quantum dots layer for resistive switching non-volatile memory</article-title>. <source>J. Mater. Chem. C</source>. <volume>5</volume>, <fpage>11046</fpage>&#x02013;<lpage>11052</lpage>. <pub-id pub-id-type="doi">10.1039/C7TC03037A</pub-id></citation></ref>
<ref id="B103">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>S.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name> <name><surname>Chen</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2018b</year>). <article-title>Memristor with Ag-cluster-doped TiO<sub>2</sub> films as artificial synapse for Neuroinspired computing</article-title>. <source>Adv. Funct. Mater</source>. <volume>28</volume>:<fpage>1705320</fpage>. <pub-id pub-id-type="doi">10.1002/adfm.201705320</pub-id></citation></ref>
<ref id="B104">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Zhao</surname> <given-names>Q.</given-names></name> <name><surname>Chen</surname> <given-names>A. P.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2019d</year>). <article-title>Vacancy-induced synaptic behavior in 2D WS<sub>2</sub> nanosheet&#x02013;based memristor for low-power neuromorphic computing</article-title>. <source>Small</source> <volume>15</volume>:<fpage>1901423</fpage>. <pub-id pub-id-type="doi">10.1002/smll.201901423</pub-id><pub-id pub-id-type="pmid">31045332</pub-id></citation></ref>
<ref id="B105">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Yuan</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2018c</year>). <article-title>Flexible memristors as electronic synapses for neuro-inspired computation based on scotch tape-exfoliated mica substrates</article-title>. <source>Nano. Res</source>. <volume>11</volume>, <fpage>1183</fpage>&#x02013;<lpage>1192</lpage>. <pub-id pub-id-type="doi">10.1007/s12274-017-1781-2</pub-id></citation></ref>
<ref id="B106">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Adhikari</surname> <given-names>S. P.</given-names></name> <name><surname>Kim</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>On learning with nonlinear memristor-based neural network and its replication</article-title>. <source>IEEE. Trans. Circ. I</source>. <volume>66</volume>, <fpage>3906</fpage>&#x02013;<lpage>3916</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2019.2914125</pub-id></citation></ref>
<ref id="B107">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Choi</surname> <given-names>H.</given-names></name> <name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Sah</surname> <given-names>M. P.</given-names></name> <name><surname>Kim</surname> <given-names>H.</given-names></name> <name><surname>Chua</surname> <given-names>L. O.</given-names></name></person-group> (<year>2014</year>). <article-title>A memristor emulator as a replacement of a real memristor</article-title>. <source>Semicond. Sci. Technol</source>. <volume>30</volume>:<fpage>015007</fpage>. <pub-id pub-id-type="doi">10.1088/0268-1242/30/1/015007</pub-id></citation></ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J. J.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B.</given-names></name> <name><surname>Stewart</surname> <given-names>D. R.</given-names></name></person-group> (<year>2013</year>). <article-title>Memristive devices for computing</article-title>. <source>Nat. Nanotechnol.</source> <volume>8</volume>, <fpage>13</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1038/nnano.2012.240</pub-id></citation></ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yao</surname> <given-names>P.</given-names></name> <name><surname>Wu</surname> <given-names>H.</given-names></name> <name><surname>Gao</surname> <given-names>B.</given-names></name> <name><surname>Tang</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Q.</given-names></name> <name><surname>Zhang</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Fully hardware-implemented memristor convolutional neural network</article-title>. <source>Nature</source> <volume>577</volume>, <fpage>641</fpage>&#x02013;<lpage>646</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-020-1942-4</pub-id><pub-id pub-id-type="pmid">31996818</pub-id></citation></ref>
<ref id="B110">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Friedman</surname> <given-names>E. G.</given-names></name></person-group> (<year>2017</year>). <article-title>Memristor-based circuit design for multilayer neural networks</article-title>. <source>IEEE Trans. Circuits Syst. I</source>. <volume>65</volume>, <fpage>677</fpage>&#x02013;<lpage>686</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2017.2729787</pub-id><pub-id pub-id-type="pmid">29677559</pub-id></citation></ref>
<ref id="B111">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Zhu</surname> <given-names>J.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Rao</surname> <given-names>M.</given-names></name> <name><surname>Song</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Brain-inspired computing with memristors: challenges in devices, circuits, and systems</article-title>. <source>Appl. Phys. Rev</source>. <volume>7</volume>, <fpage>011308</fpage>. <pub-id pub-id-type="doi">10.1063/1.5124027</pub-id></citation></ref>
<ref id="B112">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>L.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>An electronic synapse memristor device with conductance linearity using quantized conduction for neuroinspired computing</article-title>. <source>J. Mater. Chem. C</source> <volume>7</volume>, <fpage>1298</fpage>&#x02013;<lpage>1306</lpage>. <pub-id pub-id-type="doi">10.1039/C8TC04395G</pub-id></citation></ref>
<ref id="B113">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Q.</given-names></name> <name><surname>Xie</surname> <given-names>Z.</given-names></name> <name><surname>Peng</surname> <given-names>Y. P.</given-names></name> <name><surname>Wang</surname> <given-names>K.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Current status and prospects of memristors based on novel 2D materials</article-title>. <source>Mater. Horiz</source>. <volume>7</volume>, <fpage>1495</fpage>&#x02013;<lpage>1518</lpage>. <pub-id pub-id-type="doi">10.1039/C9MH02033K</pub-id></citation>
</ref>
<ref id="B114">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>N.</given-names></name> <name><surname>Mazumder</surname> <given-names>P.</given-names></name></person-group> (<year>2018</year>). <article-title>Learning in memristor crossbar-based spiking neural networks through modulation of weight-dependent spike-timing-dependent plasticity</article-title>. <source>IEEE Trans. Nanotechnol</source>. <volume>17</volume>, <fpage>520</fpage>&#x02013;<lpage>532</lpage>. <pub-id pub-id-type="doi">10.1109/TNANO.2018.2821131</pub-id></citation></ref>
<ref id="B115">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zidan</surname> <given-names>M. A.</given-names></name> <name><surname>Fahmy</surname> <given-names>H. A. H.</given-names></name> <name><surname>Hussain</surname> <given-names>M. M.</given-names></name> <name><surname>Salama</surname> <given-names>K. N.</given-names></name></person-group> (<year>2013</year>). <article-title>Memristor-based memory: the sneak paths problem and solutions</article-title>. <source>Microelectron. J</source>. <volume>44</volume>, <fpage>176</fpage>&#x02013;<lpage>183</lpage>. <pub-id pub-id-type="doi">10.1016/j.mejo.2012.10.001</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This work was financially supported by the National Natural Science Foundation of China (grant nos. 62064002, 61674050, and 61874158), the Project of Distinguished Young Scholars of Hebei Province (grant no. A2018201231), the Hundred Persons Plan of Hebei Province (grant nos. E2018050004 and E2018050003), the Supporting Plan for 100 Excellent Innovative Talents in Colleges and Universities of Hebei Province (grant no. SLRC2019018), the Special Project of Strategic Leading Science and Technology of the Chinese Academy of Sciences (grant no. XDB44000000-7), the Outstanding Young Scientific Research and Innovation Team of Hebei University, special support funds for national high-level talents (grant nos. 041500120001 and 521000981426), the Hebei University Graduate Innovation Funding Project in 2021 (grant no. HBU2021bs013), and the Foundation of Guangxi Key Laboratory of Precision Navigation Technology and Application, Guilin University of Electronic Technology (grant no. DH201908).</p>
</fn>
</fn-group>
</back>
</article> 