<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Nanotechnol.</journal-id>
<journal-title>Frontiers in Nanotechnology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Nanotechnol.</abbrev-journal-title>
<issn pub-type="epub">2673-3013</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1146852</article-id>
<article-id pub-id-type="doi">10.3389/fnano.2023.1146852</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Nanotechnology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Choose your tools carefully: a comparative evaluation of deterministic vs. stochastic and binary vs. analog neuron models for implementing emerging computing paradigms</article-title>
<alt-title alt-title-type="left-running-head">Morshed et al.</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnano.2023.1146852">10.3389/fnano.2023.1146852</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Morshed</surname>
<given-names>Md Golam</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1578591/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ganguly</surname>
<given-names>Samiran</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ghosh</surname>
<given-names>Avik W.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2230495/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Electrical and Computer Engineering</institution>, <institution>University of Virginia</institution>, <addr-line>Charlottesville</addr-line>, <addr-line>VA</addr-line>, <country>United States</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Department of Electrical and Computer Engineering</institution>, <institution>Virginia Commonwealth University</institution>, <addr-line>Richmond</addr-line>, <addr-line>VA</addr-line>, <country>United States</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Physics</institution>, <institution>University of Virginia</institution>, <addr-line>Charlottesville</addr-line>, <addr-line>VA</addr-line>, <country>United States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1129644/overview">Gina Adam</ext-link>, George Washington University, United States</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/775713/overview">Maryam Parsa</ext-link>, George Mason University, United States</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1376878/overview">Takashi Tsuchiya</ext-link>, National Institute for Materials Science, Japan</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Md Golam Morshed, <email>mm8by@virginia.edu</email>; Samiran Ganguly, <email>gangulys2@vcu.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>03</day>
<month>05</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>5</volume>
<elocation-id>1146852</elocation-id>
<history>
<date date-type="received">
<day>18</day>
<month>01</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>17</day>
<month>04</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2023 Morshed, Ganguly and Ghosh.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Morshed, Ganguly and Ghosh</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Neuromorphic computing, commonly understood as a computing approach built upon neurons, synapses, and their dynamics, as opposed to Boolean gates, is gaining large mindshare due to its direct application to current and future computing technological problems, such as smart sensing, smart devices, self-hosted and self-contained devices, artificial intelligence (AI) applications, etc. In a largely software-defined implementation of neuromorphic computing, it is possible to throw enormous computational power at a problem or to optimize models and networks depending on the specific nature of the computational task. However, a hardware-based approach requires the identification of well-suited neuronal and synaptic models to obtain high functional and energy efficiency, which is a prime concern in size, weight, and power (SWaP) constrained environments. In this work, we study the characteristics of hardware neuron models (namely, inference errors, generalizability and robustness, practical implementability, and memory capacity) that have been proposed and demonstrated using a plethora of emerging nano-materials technology-based physical devices, to quantify the performance of such neurons on certain classes of problems of great importance in real-time signal-processing-like tasks in the context of reservoir computing. We find that the answer to which neuron to use for which application depends on the particulars of the application requirements and constraints themselves, i.e., we need not only a hammer but all sorts of tools in our tool chest for high-efficiency, high-quality neuromorphic computing.</p>
</abstract>
<kwd-group>
<kwd>neuromorphic computing</kwd>
<kwd>analog neuron</kwd>
<kwd>binary neuron</kwd>
<kwd>analog stochastic neuron</kwd>
<kwd>binary stochastic neuron</kwd>
<kwd>reservoir computing</kwd>
</kwd-group>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Nanodevices</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>High-performance computing has historically developed around the Boolean computing paradigm, executed on silicon (Si) complementary metal oxide semiconductor (CMOS) hardware. In fact, software has for decades been developed around the CMOS fabric, which has singularly dictated our choice of materials, devices, circuits, and architecture&#x2013;leading to the dominant processor design paradigm: the von Neumann architecture, which separates memory and processing units. Over the last decade, however, Moore&#x2019;s law for hardware scaling has significantly slowed down, primarily due to the prohibitive energy cost of computing and an increasingly steep memory wall. At the same time, software development has significantly evolved around the &#x201c;Big Data&#x201d; paradigm, with machine learning and artificial intelligence (AI) ruling the roost. Additionally, the push towards internet of things (IoT) edge devices has prompted an intensive search for energy-efficient and compact hardware systems for on-chip data processing (<xref ref-type="bibr" rid="B8">Big data, 2018</xref>).</p>
<p>One such direction is neuromorphic computing, which mimics the architecture of the human brain to design circuits and systems that can perform highly energy-efficient computations (<xref ref-type="bibr" rid="B51">Mead, 1990</xref>; <xref ref-type="bibr" rid="B64">Schuman et al., 2017</xref>; <xref ref-type="bibr" rid="B50">Markovi&#x107; et al., 2020</xref>; <xref ref-type="bibr" rid="B14">Christensen et al., 2022</xref>; <xref ref-type="bibr" rid="B42">Kireev et al., 2022</xref>). A human brain is primarily composed of two elemental functional units&#x2013;synapses and neurons. Neurons are interconnected through synapses with different connection strengths (commonly known as synaptic weights), which provide the learning and memory capabilities of the brain. A neuron receives synaptic inputs from other neurons, generates output in the form of action potentials, and distributes the output to subsequent neurons. A human brain has <inline-formula id="inf1">
<mml:math id="m1">
<mml:mo>&#x223c;</mml:mo>
<mml:mn>1</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula> neurons and <inline-formula id="inf2">
<mml:math id="m2">
<mml:mo>&#x223c;</mml:mo>
<mml:mn>1</mml:mn>
<mml:msup>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>15</mml:mn>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula> synapses and consumes <inline-formula id="inf3">
<mml:math id="m3">
<mml:mo>&#x223c;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>10</mml:mn>
</mml:math>
</inline-formula> fJ per synaptic event (<xref ref-type="bibr" rid="B40">Kandel et al., 2000</xref>; <xref ref-type="bibr" rid="B72">Squire et al., 2012</xref>; <xref ref-type="bibr" rid="B78">Upadhyay et al., 2016</xref>).</p>
<p>To emulate the organization and functionality of a human brain, there are many proposals for physical neuromorphic computing systems using memristors (<xref ref-type="bibr" rid="B83">Yao et al., 2020</xref>; <xref ref-type="bibr" rid="B18">Duan et al., 2020</xref>; <xref ref-type="bibr" rid="B54">Moon et al., 2019</xref>), spintronics (<xref ref-type="bibr" rid="B27">Grollier et al., 2020</xref>; <xref ref-type="bibr" rid="B47">Locatelli et al., 2014</xref>; <xref ref-type="bibr" rid="B49">Lv et al., 2022</xref>), charge-density-wave (CDW) devices (<xref ref-type="bibr" rid="B46">Liu et al., 2021</xref>), photonics (<xref ref-type="bibr" rid="B69">Shastri et al., 2021</xref>; <xref ref-type="bibr" rid="B68">Shainline et al., 2017</xref>), etc. In recent years, there has been significant progress in the development of physical neuromorphic hardware, both in academia and industry. The hierarchy of neuromorphic hardware implementation spans from the system level to the device level and all the way down to the material level. At the system level, various large-scale neuromorphic computers utilize different approaches&#x2013;for instance, IBM&#x2019;s TrueNorth (<xref ref-type="bibr" rid="B53">Merolla et al., 2014</xref>), Intel&#x2019;s Loihi (<xref ref-type="bibr" rid="B17">Davies et al., 2018</xref>), SpiNNaker (<xref ref-type="bibr" rid="B22">Furber et al., 2014</xref>), BrainScaleS (<xref ref-type="bibr" rid="B62">Schemmel et al., 2010</xref>), the Tianjic chip (<xref ref-type="bibr" rid="B57">Pei et al., 2019</xref>), and Neurogrid (<xref ref-type="bibr" rid="B6">Benjamin et al., 2014</xref>). These systems support a broad class of problems, ranging from complex to more general computations. 
At the device level, the most commonly used component is the memristor, which can be utilized in both synapse and neuron implementations (<xref ref-type="bibr" rid="B39">Jo et al., 2010</xref>; <xref ref-type="bibr" rid="B67">Serb et al., 2020</xref>; <xref ref-type="bibr" rid="B33">Innocenti et al., 2021</xref>; <xref ref-type="bibr" rid="B52">Mehonic and Kenyon, 2016</xref>). Memristor crossbars are frequently used to represent synapses in neuromorphic systems (<xref ref-type="bibr" rid="B3">Adam et al., 2016</xref>; <xref ref-type="bibr" rid="B31">Hu et al., 2014</xref>). Memristors can also provide stochasticity in the neuron model (<xref ref-type="bibr" rid="B73">Suri et al., 2015</xref>). Another emerging class of devices for neuromorphic computing is spintronic devices (<xref ref-type="bibr" rid="B27">Grollier et al., 2020</xref>). Spintronic devices can be implemented with low energy and high density and are compatible with existing CMOS technology (<xref ref-type="bibr" rid="B66">Sengupta et al., 2016a</xref>). The spintronic devices utilized in neuromorphic computing include spin-torque devices (<xref ref-type="bibr" rid="B76">Torrejon et al., 2017</xref>; <xref ref-type="bibr" rid="B61">Roy et al., 2014</xref>; <xref ref-type="bibr" rid="B65">Sengupta et al., 2016b</xref>), magnetic domain walls (<xref ref-type="bibr" rid="B70">Siddiqui et al., 2020</xref>; <xref ref-type="bibr" rid="B44">Leonard et al., 2022</xref>; <xref ref-type="bibr" rid="B9">Brigner et al., 2022</xref>), and skyrmions (<xref ref-type="bibr" rid="B35">Jadaun et al., 2022</xref>; <xref ref-type="bibr" rid="B71">Song et al., 2020</xref>). Optical and photonic devices have also been used to implement neurons and synapses in recent years (<xref ref-type="bibr" rid="B69">Shastri et al., 2021</xref>; <xref ref-type="bibr" rid="B60">Romeira et al., 2016</xref>; <xref ref-type="bibr" rid="B28">Guo et al., 2021</xref>). 
The field is still very new, and many novel forms of neuron and synaptic devices can be designed to match the mathematical models of neural networks (NNs). Physical neuromorphic computing can implement these functionalities directly in device physical characteristics (I-I, V-V, I-V), resulting in highly compact devices that are well-suited for scalable and energy-efficient neuromorphic systems (<xref ref-type="bibr" rid="B12">Camsari et al., 2017a</xref>; <xref ref-type="bibr" rid="B13">Camsari et al., 2017b</xref>; <xref ref-type="bibr" rid="B23">Ganguly et al., 2021</xref>; <xref ref-type="bibr" rid="B82">Yang et al., 2013</xref>). This is critical because current NN-based computing is highly centralized (resident on, and accessed via, the cloud) and energy-inefficient: the underlying volatile, often von Neumann, digital Boolean system has to emulate the inherently analog, mostly non-volatile, distributed computing model of neural systems, even at a simple abstraction level (<xref ref-type="bibr" rid="B53">Merolla et al., 2014</xref>). Recent advances in custom design, such as FPGAs (<xref ref-type="bibr" rid="B81">Wang et al., 2018</xref>) and the more experimental Si FPNAs (<xref ref-type="bibr" rid="B21">Farquhar et al., 2006</xref>), have demonstrated that native device design, rather than emulation, is the way forward, and physical neuromorphic computing based on emerging technology can go a long way toward achieving this (<xref ref-type="bibr" rid="B59">Rajendran and Alibart, 2016</xref>).</p>
<p>There is an increased use of noise as a feature rather than a nuisance in NN models (<xref ref-type="bibr" rid="B20">Faisal et al., 2008</xref>; <xref ref-type="bibr" rid="B4">Baldassi et al., 2018</xref>; <xref ref-type="bibr" rid="B25">Goldberger and Ben-Reuven, 2017</xref>), and physical neuromorphic computing can provide natural stochasticity, with various noise colors depending on the device physics (<xref ref-type="bibr" rid="B80">Vincent et al., 2015</xref>; <xref ref-type="bibr" rid="B10">Brown et al., 2019</xref>). Some prominent areas where stochasticity and noise have been used include training generalizability (<xref ref-type="bibr" rid="B38">Jim et al., 1996</xref>), stochastic sampling (<xref ref-type="bibr" rid="B15">Cook, 1986</xref>), and the recently proposed, increasingly prominent diffusion-based generative models (<xref ref-type="bibr" rid="B32">Huang et al., 2021</xref>). In all these models, noise plays a fundamental role, i.e., these algorithms do not work without inherent noise.</p>
<p>It is therefore critical to study and analyze the kinds of devices that will be useful for implementing physical neuromorphic computing. We understand from neurobiology that there is a large degree of neuron design customization that has developed through evolution to obtain high task-based performance. Similarly, a variety of mathematical models of neurons have been designed in the NN literature as well (<xref ref-type="bibr" rid="B64">Schuman et al., 2017</xref>; <xref ref-type="bibr" rid="B11">Burkitt, 2006</xref>; <xref ref-type="bibr" rid="B23">Ganguly et al., 2021</xref>). It is quite likely that physical neuromorphics will use a variety of device designs, rather than the uniformity of the NAND gate-based design commonly seen in Boolean systems, to achieve the true benefits of energy efficiency and scalability brought forth by this paradigm of system design.</p>
<p>In this work, we study a subset of this wide variety of neuron designs that are well-represented and easily available from many proposed physical neuromorphic platforms to understand and analyze their task specialization. In particular, we analyze analog and binary neuron models, including stochasticity in the models, for analog temporal inferencing tasks, and evaluate and compare their performance. We numerically estimate the normalized mean squared error (NMSE) performance metric, discuss the effect of stochasticity on prediction accuracy vs. robustness, and show the hardware implementability of the models. Furthermore, we estimate the memory capacity of different neuron models. Our results suggest that analog stochastic neurons perform better for analog temporal inferencing tasks, both in terms of prediction accuracy and hardware implementability. Additionally, analog neurons show larger memory capacity. Our findings may provide a potential path forward toward efficient neuromorphic computing.</p>
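<p>For concreteness, the NMSE metric can be sketched as follows; the normalization by the target variance is one common convention (others normalize by the mean squared target), and the function name is our own illustrative choice.</p>

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error: MSE divided by the target variance.

    This is one common convention; the exact normalization used in a
    given study may differ.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

# Sanity checks: a perfect prediction gives 0, and predicting the
# target mean everywhere gives 1 (error equals the target variance).
t = np.linspace(0.0, 1.0, 100)
y = np.sin(2.0 * np.pi * t)
print(nmse(y, y))                            # 0.0
print(nmse(y, np.full_like(y, y.mean())))    # close to 1.0
```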
</sec>
<sec id="s2">
<title>2 Brief overview on neuron models</title>
<p>An essential function of a neuron in a NN is processing the weighted synaptic inputs and generating an output response. A single biological neuron itself is a complex dynamical system (<xref ref-type="bibr" rid="B7">Bick et al., 2020</xref>). Proposed artificial neurons in most implementations of NNs (either software or hardware) are significantly simpler unless they specifically attempt to mimic the biological neuron (<xref ref-type="bibr" rid="B29">Harmon, 1959</xref>; <xref ref-type="bibr" rid="B64">Schuman et al., 2017</xref>; <xref ref-type="bibr" rid="B63">2022</xref>). As such, their mathematical representations are cheaper, and a significant amount of computational capability derives from the network itself. However, a NN is an interplay of the neurons, the synapses, and the network structure itself, and therefore the neuron model may provide certain capabilities that help make a more efficient NN in the context of application specialization (<xref ref-type="bibr" rid="B1">Abiodun et al., 2018</xref>).</p>
<p>The set of behaviors over which such neurons can be classified and analyzed is vast and may include spiking vs. non-spiking behavior with the associated data representation, deterministic vs. stochastic output response functions, discrete (or binary) vs. continuous (or analog) output response functions, the particular mathematical model of the output response function itself (e.g., sigmoid, tanh, ReLU), the presence or absence of memory states within a neuron, etc. (<xref ref-type="bibr" rid="B26">Goodfellow et al., 2016</xref>; <xref ref-type="bibr" rid="B16">Davidson and Furber, 2021</xref>; <xref ref-type="bibr" rid="B5">Barna and Kaski, 1990</xref>). In the software NN world, the specialization of certain neural models and connectivities is well appreciated&#x2013;for example, sparse vs. dense vs. convolutional layers, or the use of ReLU neurons in the hidden layers vs. sigmoid or softmax layers at the outputs in many computer vision tasks (<xref ref-type="bibr" rid="B74">Szanda&#x142;a, 2020</xref>; <xref ref-type="bibr" rid="B84">Zhang and Woodland, 2015</xref>; <xref ref-type="bibr" rid="B56">Oostwal et al., 2021</xref>). <xref ref-type="fig" rid="F1">Figure 1A</xref> schematically shows the output characteristics of different types of widely used neuron models.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>
<bold>(A)</bold> Schematic of different types of widely used neuron models with their output characteristics. In the bottom panel, all the red curves represent the deterministic neurons&#x2019; output characteristics. In the top panel, the blue curves represent the actual stochastic output characteristics while the red is the corresponding deterministic/expected value of the output <inline-formula id="inf4">
<mml:math id="m4">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>c</mml:mi>
<mml:mspace width="0.3333em" class="nbsp"/>
<mml:mi>o</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo>&#x3e;</mml:mo>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> characteristics. Spiking neurons (SpN and SSpN) can be considered in between the two limits of purely binary vs. purely analog neurons. Please note that we only analyze the analog and binary neurons (including their stochastic counterparts) in this work, as indicated by the purple-colored bold font labels. <bold>(B)</bold> Schematic of a reservoir setup using neurons connected with each other bidirectionally with random weights.</p>
</caption>
<graphic xlink:href="fnano-05-1146852-g001.tif"/>
</fig>
<p>In this work, we have focused on two particular behaviors of neural models that we believe can capture a significant application space, particularly in the domain of lightweight real-time signal processing tasks, and are readily built from emerging materials technology. We specifically look at binary vs. analog and deterministic vs. stochastic neuron output response functions (purple-colored bold font labels in <xref ref-type="fig" rid="F1">Figure 1A</xref>). We also use them in a reservoir computing (RC)-like context for signal processing tasks for our analysis. Reservoir computing uses the dynamics of a recurrently connected network of neurons to project an input (spatio-)temporal signal onto a high-dimensional phase space, which forms the basis of inference, typically via a shallow 1-layer linear transform or a multi-layer feedforward network (<xref ref-type="bibr" rid="B75">Tanaka et al., 2019</xref>; <xref ref-type="bibr" rid="B77">Triefenbach et al., 2010</xref>; <xref ref-type="bibr" rid="B37">Jalalvand et al., 2015</xref>; <xref ref-type="bibr" rid="B24">Ganguly et al., 2018</xref>; <xref ref-type="bibr" rid="B54">Moon et al., 2019</xref>). A schematic of a reservoir is shown in <xref ref-type="fig" rid="F1">Figure 1B</xref>, where the neurons are connected to each other bidirectionally with random weights. Multiple reservoirs may be connected hierarchically for more complex, deep RC architectures. RC may be considered a machine learning analog of an extended Kalman filter in which the state-space and observation models are learned rather than designed <italic>a priori</italic> (<xref ref-type="bibr" rid="B75">Tanaka et al., 2019</xref>).</p>
<p>Our choice of evaluating these specific behavior differences on an RC-based NN reflects the prominent use case made for many emerging nano-materials technology-based neuron and synaptic devices, viz., energy-efficient learning and inference at the edge. These tasks often involve temporal or spatio-temporal data processing to extract relevant and actionable information, some examples being anomaly detection (<xref ref-type="bibr" rid="B41">Kato et al., 2022</xref>), feature tracking (<xref ref-type="bibr" rid="B2">Abreu Araujo et al., 2020</xref>), optimal control (<xref ref-type="bibr" rid="B19">Engedy and Horv&#xe1;th, 2012</xref>), and event prediction (<xref ref-type="bibr" rid="B58">Pyragas and Pyragas, 2020</xref>), all of which are well-suited for an RC-based NN. Therefore, this testbench forms a great intersection for our analysis.</p>
<p>It should be noted that we do not include spiking neurons in this particular analysis. Spiking neurons have such significantly different data encoding (level vs. rate or inter-spike-interval encoding) and learning mechanisms (back-propagation or regression vs. spike-time-dependent plasticity) that it is hard to disentangle the neuron model itself from the demonstrated tasks; we therefore leave a contrasting analysis of spiking neuron devices against their non-spiking variants for a future study.</p>
<p>The neurons are modeled in the following way:<disp-formula id="e1">
<mml:math id="m5">
<mml:mi mathvariant="bold">y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
<label>(1)</label>
</disp-formula>
</p>
<p>Here the symbols have the usual meaning, i.e., <bold>y</bold> is the output activation of the neuron, <italic>f</italic>
<sub>
<italic>N</italic>
</sub> is the activation function, which is a sigmoidal or hyperbolic tangent for most non-spiking hardware neurons, and <italic>r</italic>
<sub>
<italic>N</italic>
</sub> is a random sample drawn from a uniform distribution to represent stochasticity. It is possible to use a ReLU-like activation function, or some other sampling distribution for the stochasticity, particularly if the hardware neuron shows colored-noise behavior; we do not particularize for such details and keep the analysis confined to the most common hardware neuron variants. In our analysis, the <italic>r</italic>
<sub>
<italic>N</italic>
</sub> term is scaled down by an arbitrary factor to mimic the degree of stochasticity displayed by the neuron, and the <italic>f</italic>
<sub>
<italic>N</italic>
</sub> is either a continuous tanh() for an analog neuron or sgn(tanh()) for a binary neuron (sgn() being the signum function).</p>
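<p>The four response functions described here can be summarized in a minimal sketch. The uniform noise range, the default scaling factor <italic>b</italic>, and the function names below are our own illustrative choices; following the reservoir equations of the next section, the binary stochastic variant applies the noise inside sgn().</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def neuron(z, kind="AN", b=0.1):
    """Output response of the four neuron variants considered here.

    z    : weighted synaptic input, sum(w^T x)
    kind : AN (analog), ASN (analog stochastic),
           BN (binary), BSN (binary stochastic)
    b    : arbitrary noise scaling factor mimicking the degree of
           stochasticity displayed by the device (illustrative value)
    """
    # r_N: random sample from a uniform distribution, scaled by b
    r = b * rng.uniform(-1.0, 1.0, size=np.shape(z))
    if kind == "AN":
        return np.tanh(z)                 # continuous, deterministic
    if kind == "ASN":
        return np.tanh(z) + r             # continuous, noisy output
    if kind == "BN":
        return np.sign(np.tanh(z))        # binary, deterministic
    if kind == "BSN":
        return np.sign(np.tanh(z) + r)    # binary, noise before sgn()
    raise ValueError(kind)
```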
</sec>
<sec sec-type="methods" id="s3">
<title>3 Methods</title>
<p>As discussed previously, the neuron models are analyzed in the context of a reservoir computer, specifically an echo-state network (ESN). An ESN is composed of a collection of recurrently connected neurons, with randomly distributed interconnect weights within this collection (<xref ref-type="bibr" rid="B48">Luko&#x161;evi&#x10d;ius, 2012</xref>; <xref ref-type="bibr" rid="B45">Li et al., 2012</xref>). This forms the &#x201c;reservoir&#x201d;, which is activated by an incoming signal, and whose output is read by an output layer trained via linear regression.</p>
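<p>A minimal ESN of this shape can be sketched as follows, here with the deterministic analog-neuron update and a ridge-regression readout on a toy next-step prediction task. The reservoir size, spectral-radius scaling, input signal, and regularization strength are generic illustrative choices, not the hyperparameters of this study.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

N, T, washout = 100, 1000, 100   # reservoir size, steps, discarded steps
a = 0.3                          # leaking rate (constant for all neurons)

# Random input and recurrent weight matrices; W_s is rescaled so its
# spectral radius stays below 1, a common echo-state-property heuristic.
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))
W_s = rng.uniform(-0.5, 0.5, size=(N, N))
W_s *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_s)))

u = np.sin(0.1 * np.arange(T + 1))   # toy input signal
target = u[1:]                       # task: predict the next sample

# Drive the reservoir with the analog-neuron (AN) leaky update.
x = np.zeros(N)
states = []
for t in range(T):
    z = W_in[:, 0] * u[t] + W_s @ x
    x = (1 - a) * x + a * np.tanh(z)
    states.append(x.copy())
S = np.array(states[washout:])       # collected state vectors

# Linear readout trained by ridge regression on the collected states.
reg = 1e-6
W_out = np.linalg.solve(S.T @ S + reg * np.eye(N), S.T @ target[washout:])
pred = S @ W_out
```

Only the readout weights `W_out` are trained; `W_in` and `W_s` stay fixed at their random values, which is what distinguishes an ESN from a fully trained recurrent network.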
<p>We employ different neuron models in this work, such as analog and binary neurons (with and without stochasticity in the model), which makes a total of four models at our disposal, namely, analog neuron (AN), analog stochastic neuron (ASN), binary neuron (BN), and binary stochastic neuron (BSN). The dynamical equations of the reservoirs built using different neuron models are described as follows (<xref ref-type="bibr" rid="B23">Ganguly et al., 2021</xref>):<disp-formula id="e2">
<mml:math id="m6">
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mtext>AN</mml:mtext>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2a;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2a;</mml:mo>
<mml:mi mathvariant="italic">tanh</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">z</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mtext>ASN</mml:mtext>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2a;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2a;</mml:mo>
<mml:mi mathvariant="italic">tanh</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">z</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>&#x2a;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mtext>BN</mml:mtext>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2a;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="italic">sgn</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mo>&#x2a;</mml:mo>
<mml:mi mathvariant="italic">tanh</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">z</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mtext>BSN</mml:mtext>
<mml:mo>:</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2a;</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="italic">sgn</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mo>&#x2a;</mml:mo>
<mml:mi mathvariant="italic">tanh</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi mathvariant="bold">z</mml:mi>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>&#x2a;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(2)</label>
</disp-formula>where <bold>z</bold>[<italic>t</italic> &#x2b; 1] &#x3d; <italic>W</italic>
<sup>
<italic>in</italic>
</sup>
<bold>u</bold>[<italic>t</italic> &#x2b; 1] &#x2b; <italic>W</italic>
<sup>
<italic>s</italic>
</sup>
<bold>x</bold>[<italic>t</italic>]. Here, <bold>u</bold> is the input vector, <bold>x</bold>[<italic>t</italic>] represents the reservoir state vector at time <italic>t</italic>, <italic>a</italic> is the reservoir leaking rate (assumed to be constant for all neurons), <italic>b</italic> is the neuron noise scaling parameter that introduces stochasticity into the neuron model, <italic>r</italic>
<sub>
<italic>N</italic>
</sub> is a uniform random distribution, and <italic>W</italic>
<sup>
<italic>in</italic>
</sup> and <italic>W</italic>
<sup>
<italic>s</italic>
</sup> are the random weight matrices of the input-reservoir and reservoir-reservoir connections, respectively. We use the same leaking rate across all models so that the neuron models are compared on an equal footing, since comparing models with different parameters can introduce biases. One of the unique features of reservoir computing is its random weight matrices (<xref ref-type="bibr" rid="B75">Tanaka et al., 2019</xref>), and we consider five different network topologies by creating five sets of <italic>W</italic>
<sup>
<italic>s</italic>
</sup> using different random &#x201c;seeds&#x201d; for various reservoir sizes, which makes our analysis unbiased toward any particular network topology. The <italic>W</italic>
<sup>
<italic>s</italic>
</sup> elements are normalized using the spectral radius. We perform 1,000 simulations within each network topology, making the total sample size 5,000 for every reservoir size within each neuron model. The output vector <bold>y</bold> is obtained as:<disp-formula id="e3">
<mml:math id="m7">
<mml:mi mathvariant="bold">y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>W</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">out</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:math>
<label>(3)</label>
</disp-formula>where <italic>W</italic>
<sup>
<italic>out</italic>
</sup> represents the reservoir-output weight matrix. We consider two training methods: &#x201c;offline&#x201d; and &#x201c;online&#x201d; training. For &#x201c;offline&#x201d; training, we extract the output weight matrix, <italic>W</italic>
<sup>
<italic>out</italic>
</sup> once at the end of the training cycle and use that static <italic>W</italic>
<sup>
<italic>out</italic>
</sup> for the testing cycle. In contrast, for &#x201c;online&#x201d; training, <italic>W</italic>
<sup>
<italic>out</italic>
</sup> is periodically updated throughout the testing cycle. The entire testing cycle is divided into 40 segments. The first segment uses the <italic>W</italic>
<sup>
<italic>out</italic>
</sup> extracted from the initial training cycle. We calculate a new <italic>W</italic>
<sup>
<italic>out</italic>
</sup> after the first segment of the testing cycle. Then, we update the <italic>W</italic>
<sup>
<italic>out</italic>
</sup> such that the elements are composed of 90% from the older version and 10% from the new one. The updated <italic>W</italic>
<sup>
<italic>out</italic>
</sup> is used for the second segment, and the procedure continues throughout the testing cycle. This stabilizes the learning at the cost of higher error rates, since the learned weights evolve only slowly toward a new configuration. This is akin to the successive over-relaxation methods used in many self-consistent numerical algorithms to improve convergence.</p>
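As an illustration, the state updates of Eq. 2 and the segment-wise readout blending described above can be sketched in NumPy. This is a minimal sketch, not the authors' code: the uniform noise range [&#x2212;1, 1] for <italic>r</italic><sub><italic>N</italic></sub> and the default parameter values are assumptions.

```python
import numpy as np

def step(x, u_next, W_in, W_s, a=0.3, b=0.05, binary=True, rng=None):
    """One reservoir update per Eq. 2. binary=False gives the ASN form,
    binary=True the BSN form; b=0 recovers the deterministic AN/BN models.

    x      : reservoir state vector at time t
    u_next : input vector at time t+1
    a      : leaking rate (same constant for all neurons)
    b      : noise scaling parameter (degree of stochasticity)
    """
    rng = rng or np.random.default_rng()
    z = W_in @ u_next + W_s @ x                       # z[t+1] = W_in u[t+1] + W_s x[t]
    noise = b * rng.uniform(-1.0, 1.0, size=x.shape)  # b * r_N[t]; range assumed
    if binary:   # BSN: sign of the noisy activation
        return (1 - a) * x + np.sign(a * np.tanh(z) + noise)
    else:        # ASN: analog activation plus additive noise
        return (1 - a) * x + a * np.tanh(z) + noise

def blend_readout(W_out_old, W_out_new, keep=0.9):
    """'Online' training: after each test segment the readout is updated
    as 90% of the older version plus 10% of the newly extracted one."""
    return keep * W_out_old + (1 - keep) * W_out_new
```

In an online run, `blend_readout` would be applied once per segment, so the readout relaxes toward each newly regressed solution rather than jumping to it.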
</sec>
<sec sec-type="results|discussion" id="s4">
<title>4 Results and discussions</title>
<sec id="s4-1">
<title>4.1 Binary vs. analog: inference errors</title>
<p>We implement a temporal inferencing task, specifically a time-series prediction task, to test and compare the performance of the different neuron models. We consider an input signal of the form <italic>u</italic>(<italic>t</italic>) &#x3d; <italic>A</italic>&#x2009;cos(2<italic>&#x3c0;f</italic>
<sub>1</sub>
<italic>t</italic>) &#x2b; <italic>B</italic>&#x2009;sin(2<italic>&#x3c0;f</italic>
<sub>2</sub>
<italic>t</italic>), which we refer to as the clean input. We use <italic>A</italic> &#x3d; 1, <italic>B</italic> &#x3d; 2, <italic>f</italic>
<sub>1</sub> &#x3d; 0.10 <italic>Hz</italic>, and <italic>f</italic>
<sub>2</sub> &#x3d; 0.02&#xa0;<italic>Hz</italic>. Although we choose the magnitude and frequency of the input arbitrarily, we further investigate other combinations of these variables (<xref ref-type="table" rid="T1">Table 1</xref>) to ensure that our analysis remains independent of them. We train the neuron models using the clean input signal and test them on a test signal from the same generator. The neuron models learn to reproduce the test signal from their previously self-generated output. The performance of the neuron models on time-series prediction tasks is usually measured by the NMSE, a metric that indicates how accurately the models can predict the test signal. If <italic>y</italic>
<sub>
<italic>tar</italic>
</sub> is the target output and <italic>y</italic>
<sub>
<italic>pre</italic>
</sub> is the actual predicted output, for <italic>N</italic>
<sub>
<italic>T</italic>
</sub> time steps, we define NMSE as:<disp-formula id="e4">
<mml:math id="m8">
<mml:mi>N</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">tar</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">max</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">tar</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">min</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:munderover>
</mml:mstyle>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">tar</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">pre</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(4)</label>
</disp-formula>
</p>
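Eq. 4 translates directly into code. The following NumPy sketch (illustrative only; function and variable names are ours) computes the NMSE from target and predicted traces:

```python
import numpy as np

def nmse(y_tar, y_pre):
    """Eq. 4: squared prediction error summed over N_T time steps,
    normalized by N_T and by the target's range (max - min)."""
    y_tar = np.asarray(y_tar, dtype=float)
    y_pre = np.asarray(y_pre, dtype=float)
    n_t = len(y_tar)
    span = y_tar.max() - y_tar.min()   # y_tar^max - y_tar^min
    return np.sum((y_tar - y_pre) ** 2) / (n_t * span)
```

The range normalization makes errors comparable across input signals of different amplitudes.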
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Average NMSE data extracted from the ASN and BSN models (<italic>b</italic> &#x3d; 5%) for various reservoir sizes. The form of the input signal is, <italic>u</italic>(<italic>t</italic>) &#x3d; <italic>A</italic>&#x2009;cos(2<italic>&#x3c0;f</italic>
<sub>1</sub>
<italic>t</italic>) &#x2b; <italic>B</italic>&#x2009;sin(2<italic>&#x3c0;f</italic>
<sub>2</sub>
<italic>t</italic>) &#x2b; <italic>C</italic>[<italic>rand</italic>(1, <italic>t</italic>) &#x2212; 0.5].</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="center">Model</th>
<th align="center">Reservoir size</th>
<th colspan="3" align="center">Avg. NMSE for different input signals</th>
</tr>
<tr>
<th align="left"/>
<th align="left"/>
<th align="center">{<italic>A</italic>, <italic>B</italic>, <italic>C</italic>} &#x3d; {0.5, 1.0, 0.0} {<italic>f</italic>
<sub>1</sub>, <italic>f</italic>
<sub>2</sub>} &#x3d; {0.20, 0.04} <italic>Hz</italic>
</th>
<th align="center">{<italic>A</italic>, <italic>B</italic>, <italic>C</italic>} &#x3d; {1.0, 2.0, 0.5} {<italic>f</italic>
<sub>1</sub>, <italic>f</italic>
<sub>2</sub>} &#x3d; {0.10, 0.02} <italic>Hz</italic>
</th>
<th align="center">{<italic>A</italic>, <italic>B</italic>, <italic>C</italic>} &#x3d; {1.0, 2.0, 1.5} {<italic>f</italic>
<sub>1</sub>, <italic>f</italic>
<sub>2</sub>} &#x3d; {0.10, 0.02} <italic>Hz</italic>
</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="5" align="center">ASN</td>
<td align="center">
<italic>N</italic> &#x3d; 10</td>
<td align="center">0.1729</td>
<td align="center">0.1453</td>
<td align="center">0.1501</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 20</td>
<td align="center">0.1585</td>
<td align="center">0.1199</td>
<td align="center">0.1161</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 30</td>
<td align="center">0.1183</td>
<td align="center">0.0960</td>
<td align="center">0.0984</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 40</td>
<td align="center">0.1080</td>
<td align="center">0.0775</td>
<td align="center">0.1001</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 50</td>
<td align="center">0.0791</td>
<td align="center">0.0605</td>
<td align="center">0.0816</td>
</tr>
<tr>
<td rowspan="5" align="center">BSN</td>
<td align="center">
<italic>N</italic> &#x3d; 10</td>
<td align="center">0.2510</td>
<td align="center">0.2396</td>
<td align="center">0.2546</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 20</td>
<td align="center">0.2233</td>
<td align="center">0.2102</td>
<td align="center">0.2184</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 30</td>
<td align="center">0.2103</td>
<td align="center">0.1895</td>
<td align="center">0.2028</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 40</td>
<td align="center">0.2331</td>
<td align="center">0.2156</td>
<td align="center">0.2040</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 50</td>
<td align="center">0.2329</td>
<td align="center">0.2142</td>
<td align="center">0.2173</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>
<xref ref-type="fig" rid="F2">Figures 2A,B</xref> show the NMSE for ASN and BSN, respectively, for the time-series prediction task at various reservoir sizes. We generate the results using the &#x2018;offline&#x2019; training discussed in the method section, for a clean input signal. We incorporate stochasticity by adding 5% white noise to both neuron models (<italic>b</italic> &#x3d; 0.05). The total sample size is 5,000 for each reservoir size; however, we do not obtain a valid NMSE in all 5,000 cases, because for some runs the network fails to predict the input signal and blows up. We get <inline-formula id="inf5">
<mml:math id="m9">
<mml:mo>&#x223c;</mml:mo>
<mml:mn>90</mml:mn>
<mml:mi>%</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>%</mml:mi>
</mml:math>
</inline-formula> successful cases depending on the reservoir size. Only valid data points are included in <xref ref-type="fig" rid="F2">Figure 2</xref> and all subsequent figures. We find that ASN performs better than BSN for all reservoir sizes, as indicated by the average NMSE (cyan dashed-dotted line). Overall, the NMSE is less scattered for ASN than for BSN, as is its standard deviation (magenta dashed-dotted line), shown in the bottom panel of <xref ref-type="fig" rid="F2">Figure 2</xref>. For ASN, the average NMSE has a decreasing trend as the reservoir size increases, indicating that larger networks predict better. This happens because of the substantially richer dynamics and larger phase-space volume possible in a large network. In contrast, for BSN, the average NMSE is almost unchanged as the reservoir size increases.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Comparison of NMSE for an analog time-series prediction task between <bold>(A)</bold> ASN and <bold>(B)</bold> BSN models as a function of reservoir size with 5% stochasticity incorporated in both the neuron models for a clean input signal. The form of the clean input signal is <italic>u</italic>(<italic>t</italic>) &#x3d; <italic>A</italic>&#x2009;cos(2<italic>&#x3c0;f</italic>
<sub>1</sub>
<italic>t</italic>) &#x2b; <italic>B</italic>&#x2009;sin(2<italic>&#x3c0;f</italic>
<sub>2</sub>
<italic>t</italic>), where <italic>A</italic> &#x3d; 1, <italic>B</italic> &#x3d; 2, <italic>f</italic>
<sub>1</sub> &#x3d; 0.10 <italic>Hz</italic>, and <italic>f</italic>
<sub>2</sub> &#x3d; 0.02&#xa0;<italic>Hz</italic>. ASN performs better than BSN for the entire range of reservoir size as indicated by the average (<italic>&#x3bc;</italic>) NMSE (cyan dashed-dotted line). ASN shows a decreasing trend in NMSE as a function of reservoir size while BSN results remain almost unchanged. The NMSE data for every reservoir size is obtained from five different reservoir topologies and 1,000 simulation runs (different random &#x201c;seed&#x201d;) within each topology (total sample size is 5,000). The color bar represents the frequency of the NMSE data. Note that in some cases, our model fails to generate a meaningful NMSE as the reservoir output blows up. We get meaningful output from <inline-formula id="inf6">
<mml:math id="m10">
<mml:mo>&#x223c;</mml:mo>
<mml:mn>90</mml:mn>
<mml:mi>%</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>%</mml:mi>
</mml:math>
</inline-formula> cases depending on the reservoir sizes, and those data are plotted here and used to estimate the average NMSE. The bottom panel is the zoomed version of the top panel and the magenta dashed-dotted lines are the guide to the eye that shows the data distribution in the range of <italic>&#x3bc;</italic> &#xb1; <italic>&#x3c3;</italic>. The color codes to represent the <italic>&#x3bc;</italic> and <italic>&#x3c3;</italic> are the same for the subsequent figures henceforth.</p>
</caption>
<graphic xlink:href="fnano-05-1146852-g002.tif"/>
</fig>
<p>We vary the degree of stochasticity incorporated in the neuron models. <xref ref-type="fig" rid="F3">Figures 3A,B</xref> show the distribution of the NMSE for different percentages of stochasticity, <italic>b</italic>, for the ASN and BSN models, respectively. We find that ASN performs better than its BSN counterpart throughout the range of <italic>b</italic>, as indicated by the average NMSE. For ASN, the average NMSE shows a sub-linear trend as a function of <italic>b</italic> (<xref ref-type="fig" rid="F3">Figure 3C</xref>) for various reservoir sizes, while for BSN, the average NMSE remains unchanged (<xref ref-type="fig" rid="F3">Figure 3D</xref>). For a pure analog neuron (<italic>b</italic> &#x3d; 0%), the NMSE is not widely spread, and for larger reservoir sizes the average NMSE is smaller than for the stochastic models; however, a neuron model with zero stochasticity is not practical. Moreover, stochasticity helps to make the system stable and reliable, as discussed in the next section. Although the average NMSE increases with increasing <italic>b</italic>, we conjecture that <italic>b</italic> &#x3d; 2&#x2013;5% would be optimal.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Evolution of NMSE for different degrees of stochasticity (noise percentages) associated with the <bold>(A)</bold> ASN and <bold>(B)</bold> BSN models. ASN performs better than the BSN model for analog time-series prediction tasks throughout the ranges of the degree of stochasticity as indicated by the average NMSE shown in <bold>(C)</bold> and <bold>(D)</bold> for ASN and BSN, respectively. The characteristics of the average NMSE as a function of reservoir size, i.e., the decreasing trend for ASN while almost no change for BSN holds throughout the range of <italic>b</italic>.</p>
</caption>
<graphic xlink:href="fnano-05-1146852-g003.tif"/>
</fig>
<p>The aforementioned results are based on a clean input signal. We test the models on distorted input as well. For the distorted case, we add white noise to the clean input, and the distorted input signal takes the form <italic>u</italic>(<italic>t</italic>) &#x3d; <italic>A</italic>&#x2009;cos(2<italic>&#x3c0;f</italic>
<sub>1</sub>
<italic>t</italic>) &#x2b; <italic>B</italic>&#x2009;sin(2<italic>&#x3c0;f</italic>
<sub>2</sub>
<italic>t</italic>) &#x2b; <italic>C</italic>[<italic>rand</italic>(1, <italic>t</italic>) &#x2212; 0.5]. The white noise is uniformly distributed over all <italic>t</italic> values, in both the positive and negative halves of the sinusoidal input. The degree of noise is chosen arbitrarily; again, we show various degrees of noise (<xref ref-type="table" rid="T1">Table 1</xref>) to make the analysis independent of any specific value of the noise margin. The NMSE results shown in <xref ref-type="fig" rid="F4">Figures 4A,B</xref> are calculated using <italic>A</italic> &#x3d; 1, <italic>B</italic> &#x3d; 2, <italic>C</italic> &#x3d; 1, <italic>f</italic>
<sub>1</sub> &#x3d; 0.10 <italic>Hz</italic>, and <italic>f</italic>
<sub>2</sub> &#x3d; 0.02&#xa0;<italic>Hz</italic>. We find better performance for ASN than for BSN for the distorted input as well. For ASN, the spread of the NMSE is smaller with a distorted input signal, which reduces the standard deviation. The characteristics of the average NMSE are similar for the clean and distorted inputs for both the ASN (<xref ref-type="fig" rid="F4">Figure 4C</xref>) and BSN (<xref ref-type="fig" rid="F4">Figure 4D</xref>) models. However, the average NMSE is slightly lower for the distorted input for both types of neuron models. Furthermore, we use different combinations of signal magnitude, frequency, and noise weight in the input signal, and list the average NMSE for various reservoir sizes in <xref ref-type="table" rid="T1">Table 1</xref>. Additionally, we explore input functions beyond the simple sinusoid used in the aforementioned results: a sinusoid with higher harmonic terms, a sawtooth input function, and a square input function. The forms of these functions are <inline-formula id="inf7">
<mml:math id="m11">
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>4</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:msubsup>
<mml:mrow>
<mml:mo movablelimits="false" form="prefix">&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>15</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mi>sin</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>n</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mi>t</mml:mi>
</mml:math>
</inline-formula> (odd <italic>n</italic>), <italic>u</italic>(<italic>t</italic>) &#x3d; <italic>A sawtooth</italic>(2<italic>&#x3c0;f</italic>
<sub>1</sub>
<italic>t</italic>) &#x2b; <italic>B sawtooth</italic>(2<italic>&#x3c0;f</italic>
<sub>2</sub>
<italic>t</italic>), <italic>u</italic>(<italic>t</italic>) &#x3d; <italic>A square</italic>(2<italic>&#x3c0;f</italic>
<sub>1</sub>
<italic>t</italic>) &#x2b; <italic>B square</italic>(2<italic>&#x3c0;f</italic>
<sub>2</sub>
<italic>t</italic>), respectively. In the case of sinusoidal with higher harmonic terms, we use the fundamental frequency <italic>f</italic>
<sub>1</sub> &#x3d; 0.10&#xa0;<italic>Hz</italic>. For the sawtooth and square inputs, the magnitude and frequency remain the same as for the original sinusoidal clean input. The results are summarized in <xref ref-type="fig" rid="F5">Figure 5</xref>, where the labels Input 1, Input 2, Input 3, and Input 4 correspond to the sinusoidal clean input, the sinusoid with higher harmonic terms, the sawtooth, and the square input functions, respectively. <xref ref-type="fig" rid="F5">Figure 5</xref> shows that for all the different inputs, ASN performance is better than that of BSN in terms of NMSE. Comparing all the cases, we conclude that ASN performs better than BSN for the temporal inferencing task.</p>
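The four input functions (Input 1 through Input 4) can be generated, for example, as in the following NumPy sketch. The sawtooth and square waves are built from elementary functions rather than a signal-processing library, and the default parameter values simply follow those quoted above; the function name and interface are ours.

```python
import numpy as np

def input_signal(t, kind=1, A=1.0, B=2.0, f1=0.10, f2=0.02):
    """Generate test inputs. kind: 1=clean two-tone sinusoid,
    2=square-wave Fourier series (odd harmonics of f1 up to n=15),
    3=two-tone sawtooth, 4=two-tone square. t is an array of samples."""
    t = np.asarray(t, dtype=float)
    if kind == 1:    # Input 1: A cos(2*pi*f1*t) + B sin(2*pi*f2*t)
        return A * np.cos(2 * np.pi * f1 * t) + B * np.sin(2 * np.pi * f2 * t)
    if kind == 2:    # Input 2: (4/pi) * sum over odd n of (1/n) sin(2*pi*n*f1*t)
        n = np.arange(1, 16, 2)                  # n = 1, 3, ..., 15
        return (4 / np.pi) * np.sum(
            np.sin(2 * np.pi * np.outer(n, t) * f1) / n[:, None], axis=0)
    saw = lambda f: 2 * (f * t - np.floor(0.5 + f * t))   # sawtooth in [-1, 1]
    if kind == 3:    # Input 3: two-tone sawtooth
        return A * saw(f1) + B * saw(f2)
    if kind == 4:    # Input 4: two-tone square wave
        return A * np.sign(np.sin(2 * np.pi * f1 * t)) + \
               B * np.sign(np.sin(2 * np.pi * f2 * t))
    raise ValueError("kind must be 1-4")
```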
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Evolution of NMSE for different degrees of stochasticity for <bold>(A)</bold> ASN and <bold>(B)</bold> BSN models for a distorted input signal. Random white noise is added to the clean input signal to introduce distortion and the form of the distorted signal is <italic>u</italic>(<italic>t</italic>) &#x3d; <italic>A</italic>&#x2009;cos(2<italic>&#x3c0;f</italic>
<sub>1</sub>
<italic>t</italic>) &#x2b; <italic>B</italic>&#x2009;sin(2<italic>&#x3c0;f</italic>
<sub>2</sub>
<italic>t</italic>) &#x2b; <italic>C</italic>[<italic>rand</italic>(1, <italic>t</italic>) &#x2212; 0.5], where <italic>A</italic> &#x3d; 1, <italic>B</italic> &#x3d; 2, <italic>C</italic> &#x3d; 1, <italic>f</italic>
<sub>1</sub> &#x3d; 0.10 <italic>Hz</italic>, and <italic>f</italic>
<sub>2</sub> &#x3d; 0.02&#xa0;<italic>Hz</italic>. ASN performs better than BSN for the distorted input, as indicated by the average NMSE shown in <bold>(C)</bold> and <bold>(D)</bold> for ASN and BSN, respectively, which dictates the robustness of the ASN model in terms of performance irrespective of the input signals.</p>
</caption>
<graphic xlink:href="fnano-05-1146852-g004.tif"/>
</fig>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Comparison of NMSE for the time-series prediction task between ASN and BSN models for various input functions for a reservoir size of <bold>(A)</bold> N &#x3d; 20 and <bold>(B)</bold> N &#x3d; 30. The degree of stochasticity incorporated in both neuron models is 5%. The labels Input 1, Input 2, Input 3, and Input 4 correspond to the sinusoidal clean input, sinusoidal with higher harmonic terms, sawtooth, and square input functions, respectively. ASN performance is better than that of BSN in terms of NMSE for the different input functions.</p>
</caption>
<graphic xlink:href="fnano-05-1146852-g005.tif"/>
</fig>
</sec>
<sec id="s4-2">
<title>4.2 Deterministic vs. stochastic: generalizability and robustness</title>
<p>One important aspect of any NN implementation is the generalizability and robustness of the learning. A model trained on a very specific data distribution will fail when run on a distribution that differs from the one it was trained on. This is particularly true if a generative model guides its own subsequent learning, as in our online learning scenario. In this case, the underlying distribution varies slowly while the network evolves its internal generative model to match the output distribution, i.e., it works as a dynamically evolving temporal auto-encoder.</p>
<p>The stochasticity of the neuron response adds errors to the generated output, as seen in the previous cases. However, we find that after a few iterations of the online learning cycle, the online learning itself can fail: the linear regression-based learning cannot keep up with the evolution of the test distribution and the error builds up (we call this a blowup), so the whole training needs to be fully reset or reinitiated and cannot merely evolve from the previous learning. This blowup occurs 100% of the time for deterministic analog neurons, and the rate decreases as the degree of stochasticity (parameter <italic>b</italic>) increases.</p>
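A blowup of this kind can be detected, for instance, by flagging runs whose output diverges far beyond the target range or becomes non-finite. The sketch below is illustrative only; the threshold factor is an assumption of ours, not a value from the paper.

```python
import numpy as np

def is_blowup(y_pre, y_tar, threshold=10.0):
    """Flag a run with no meaningful NMSE: the predicted trace contains
    non-finite values or exceeds `threshold` times the target's range."""
    y_pre = np.asarray(y_pre, dtype=float)
    span = np.ptp(np.asarray(y_tar, dtype=float))   # target max - min
    return (not np.all(np.isfinite(y_pre))) or np.max(np.abs(y_pre)) > threshold * span
```

Counting flagged runs over the 5,000-sample ensemble yields a blowup rate comparable to the percentages reported in Table 2.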
<p>This is shown in <xref ref-type="table" rid="T2">Table 2</xref> for various input functions. It should be noted that at very high stochasticity the training is more robust but the errors are high; therefore, a minimal amount of stochasticity is a useful trade-off between these two ends. How far the trade-off can be pushed depends on the application scenario. If full retraining is too expensive or unacceptable, a relatively high degree of stochasticity in the neuron is necessary; but if retraining the whole network frequently is cheap and acceptable, a near-deterministic neuron is better suited to the requirements.</p>
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>Robustness vs. accuracy trade-off (<italic>N</italic> &#x3d; 20). The label Input 1, Input 2, Input 3, and Input 4 correspond to the sinusoidal clean input, sinusoidal with higher harmonic terms, sawtooth, and square input functions described earlier, respectively.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="center">Model</th>
<th align="center">b (%)</th>
<th colspan="4" align="center">Blowup (%)</th>
<th colspan="4" align="center">Avg. NMSE</th>
</tr>
<tr>
<th align="left"/>
<th align="left"/>
<th align="center">Input 1</th>
<th align="center">Input 2</th>
<th align="center">Input 3</th>
<th align="center">Input 4</th>
<th align="center">Input 1</th>
<th align="center">Input 2</th>
<th align="center">Input 3</th>
<th align="center">Input 4</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="center">AN</td>
<td align="center">0</td>
<td align="center">100</td>
<td align="center">100</td>
<td align="center">100</td>
<td align="center">100</td>
<td align="center">&#x2212;</td>
<td align="center">&#x2212;</td>
<td align="center">&#x2212;</td>
<td align="center">&#x2212;</td>
</tr>
<tr>
<td rowspan="7" align="center">ASN</td>
<td align="center">1</td>
<td align="center">74.7</td>
<td align="center">81.3</td>
<td align="center">98.5</td>
<td align="center">98.6</td>
<td align="center">0.3175</td>
<td align="center">0.2759</td>
<td align="center">0.4947</td>
<td align="center">0.5475</td>
</tr>
<tr>
<td align="center">2</td>
<td align="center">66.4</td>
<td align="center">79.3</td>
<td align="center">92.0</td>
<td align="center">92.9</td>
<td align="center">0.2921</td>
<td align="center">0.3225</td>
<td align="center">0.3947</td>
<td align="center">0.5537</td>
</tr>
<tr>
<td align="center">3</td>
<td align="center">60.7</td>
<td align="center">78.7</td>
<td align="center">85.9</td>
<td align="center">88.9</td>
<td align="center">0.2854</td>
<td align="center">0.3301</td>
<td align="center">0.3744</td>
<td align="center">0.5591</td>
</tr>
<tr>
<td align="center">4</td>
<td align="center">56.2</td>
<td align="center">77.0</td>
<td align="center">81.0</td>
<td align="center">84.3</td>
<td align="center">0.2782</td>
<td align="center">0.3534</td>
<td align="center">0.3572</td>
<td align="center">0.5515</td>
</tr>
<tr>
<td align="center">5</td>
<td align="center">53.9</td>
<td align="center">76.3</td>
<td align="center">76.4</td>
<td align="center">80.7</td>
<td align="center">0.2778</td>
<td align="center">0.3597</td>
<td align="center">0.3636</td>
<td align="center">0.5358</td>
</tr>
<tr>
<td align="center">10</td>
<td align="center">49.1</td>
<td align="center">71.6</td>
<td align="center">66.5</td>
<td align="center">71.4</td>
<td align="center">0.2849</td>
<td align="center">0.3903</td>
<td align="center">0.3398</td>
<td align="center">0.5316</td>
</tr>
<tr>
<td align="center">15</td>
<td align="center">48.8</td>
<td align="center">69.3</td>
<td align="center">59.7</td>
<td align="center">67.3</td>
<td align="center">0.3019</td>
<td align="center">0.4266</td>
<td align="center">0.3557</td>
<td align="center">0.5412</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s4-3">
<title>4.3 Synaptic weights dynamic range: hardware implementability</title>
<p>One critical aspect of the hardware implementability of neuromorphic computing is the ability to modulate the weights over the required dynamic range, i.e., the orders of magnitude over which weights may be distributed. It can be shown that a 30-bit weight resolution corresponds to roughly a 100&#xa0;dB dynamic range. While such ranges are comparatively easy to implement in software, implementing such a high dynamic range in physical hardware is significantly more difficult. While some memristive materials may show multi-step behavior, it is hard to achieve much more than one order of magnitude of change in the weights. Note that we do not mean the change in the physical characteristic (typically the resistance) used to represent the weights, but rather the number of distinguishable steps with which a weight can be implemented.</p>
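The dynamic range of a trained weight matrix can be estimated, for example, as the ratio of the largest to the smallest nonzero weight magnitude. In the sketch below, the 10&#xb7;log10 (power) convention is an assumption chosen to match the 30-bit &#x2248; 100&#xa0;dB figure quoted above (10&#xb7;log10(2&#xb3;&#x2070;) &#x2248; 90&#xa0;dB); the function name and the zero cutoff are ours.

```python
import numpy as np

def dynamic_range_db(W, eps=1e-12):
    """Dynamic range of learned weights: ratio of the largest to the
    smallest nonzero |w|, expressed in dB (10*log10 power convention)."""
    mags = np.abs(np.asarray(W, dtype=float)).ravel()
    mags = mags[mags > eps]            # ignore exact (or near-) zeros
    return 10.0 * np.log10(mags.max() / mags.min())
```

Applied to the trained readout of each model, this gives the per-model dynamic ranges compared in Figure 6.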
<p>We compare the dynamic range of the learned synaptic weights that must be implemented in the reservoir networks (in the trained output readout layer) for various input functions and find that the ASN networks show the smallest dynamic range in all cases (<xref ref-type="fig" rid="F6">Figure 6</xref>), suggesting the easiest path to the hardware implementability of physical neuromorphic computing. It is important to note that the hardware implementation of neuromorphic computing remains an open question, and the dynamic range of the synaptic weights is one of the important factors for physical deployment, as discussed above. ASN networks show the best performance in terms of the dynamic range of the learned synaptic weights, which suggests that networks employing ASN models may have better hardware implementability; however, this requires further analysis in terms of energy cost, scalability, and reconfigurability, which we leave for a future study.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Dynamic range of the learned synaptic weights, <italic>W</italic>
<sup>
<italic>out</italic>
</sup> for all the neuron models (<italic>N</italic> &#x3d; 20). 5% stochasticity is considered in the ASN and BSN models. The ASN model shows the smallest dynamic range, which leads to better hardware implementability. The labels Input 1, Input 2, Input 3, and Input 4 correspond to the sinusoidal clean input, sinusoidal with higher harmonic terms, sawtooth, and square input functions, respectively.</p>
</caption>
<graphic xlink:href="fnano-05-1146852-g006.tif"/>
</fig>
</sec>
<sec id="s4-4">
<title>4.4 Memory capacity</title>
<p>The performance of reservoir computing is often described by memory capacity (MC) (<xref ref-type="bibr" rid="B36">Jaeger, 2002</xref>; <xref ref-type="bibr" rid="B79">Verstraeten et al., 2007</xref>; <xref ref-type="bibr" rid="B34">Inubushi and Yoshimura, 2017</xref>). It measures how much information from previous inputs is present in the current output state of the reservoir. The task is to reproduce a delayed version of the input signal. For a given time delay <italic>k</italic>, we measure how well the current reservoir output <italic>y</italic>
<sub>
<italic>k</italic>
</sub>(<italic>t</italic>) can recall the input <italic>u</italic> at time <italic>t</italic> &#x2212; <italic>k</italic>. The linear MC is defined as:<disp-formula id="e5">
<mml:math id="m12">
<mml:mi>M</mml:mi>
<mml:mi>C</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:munder>
</mml:mstyle>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mi>u</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msup>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
</mml:math>
<label>(5)</label>
</disp-formula>where <italic>u</italic>(<italic>t</italic> &#x2212; <italic>k</italic>) is the delayed version of the input signal, which is the target output, and <italic>y</italic>
<sub>
<italic>k</italic>
</sub>(<italic>t</italic>) is the output of the reservoir unit trained on the delay <italic>k</italic>. <italic>cov</italic> and <italic>&#x3c3;</italic>
<sup>2</sup> denote <italic>covariance</italic> and <italic>variance</italic>, respectively.</p>
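Equation 5 can be evaluated directly from recorded reservoir states: for each delay k, a separate readout is trained on the shifted input, and the squared correlation terms are accumulated. The sketch below uses a toy tanh reservoir with i.i.d. uniform input, which is an assumption for illustration rather than the authors' exact setup.

```python
import numpy as np

def linear_memory_capacity(u, states, k_max, ridge=1e-6):
    """MC = sum_k cov^2(u(t-k), y_k(t)) / (var(u(t-k)) * var(y_k(t)))  (Eq. 5)."""
    n = states.shape[1]
    mc = 0.0
    for k in range(1, k_max + 1):
        X, target = states[k:], u[:-k]        # align state at time t with input at t - k
        w = np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ target)
        y = X @ w                              # readout y_k(t) trained on delay k
        cov = ((target - target.mean()) * (y - y.mean())).mean()
        var_u, var_y = target.var(), y.var()
        if var_u > 0 and var_y > 0:
            mc += cov ** 2 / (var_u * var_y)   # squared correlation, bounded by 1
    return mc

rng = np.random.default_rng(1)
T, N = 2000, 40
u = rng.uniform(-1.0, 1.0, T)                  # i.i.d. input, standard for MC tasks
W_in = rng.normal(0.0, 0.5, N)
W_res = rng.normal(0.0, 1.0, (N, N))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))
x, states = np.zeros(N), np.empty((T, N))
for t in range(T):
    x = np.tanh(W_res @ x + W_in * u[t])
    states[t] = x

print(linear_memory_capacity(u, states, k_max=50))
```

Each term is a squared correlation in [0, 1], so the total MC is bounded by k_max = 50, consistent with the analog-neuron values in Table 3.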
<p>
<xref ref-type="table" rid="T3">Table 3</xref> shows the linear MC for different neuron models for the distorted input <italic>u</italic>(<italic>t</italic>) &#x3d; <italic>A</italic>&#x2009;cos(2<italic>&#x3c0;f</italic>
<sub>1</sub>
<italic>t</italic>) &#x2b; <italic>B</italic>&#x2009;sin(2<italic>&#x3c0;f</italic>
<sub>2</sub>
<italic>t</italic>) &#x2b; <italic>C</italic>[<italic>rand</italic>(1, <italic>t</italic>) &#x2212; 0.5], where <italic>A</italic> &#x3d; 1, <italic>B</italic> &#x3d; 2, <italic>C</italic> &#x3d; 1, <italic>f</italic>
<sub>1</sub> &#x3d; 0.10 <italic>Hz</italic>, and <italic>f</italic>
<sub>2</sub> &#x3d; 0.02&#xa0;<italic>Hz</italic>. We consider delayed signals over 1 to 50 timesteps, i.e., <italic>k</italic> spans from 1 to 50. We find that analog neurons have significantly larger linear MC than binary neurons. For analog neurons, linear MC increases with reservoir size, as expected, because a larger dynamical system can retain more information from the past (<xref ref-type="bibr" rid="B36">Jaeger, 2002</xref>). Additionally, including stochasticity in the analog neuron model degrades the linear MC, as reported previously (<xref ref-type="bibr" rid="B36">Jaeger, 2002</xref>). In contrast, binary neurons show no substantial change in linear MC when the reservoir size is varied or stochasticity is included in the model.</p>
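The distorted input used for Table 3 can be reproduced directly from its formula. Here rand(1, t) is read as uniform noise on [0, 1), an assumption about the notation, and the timestep is taken as 1 s so the frequencies are in Hz.

```python
import numpy as np

def distorted_input(T, A=1.0, B=2.0, C=1.0, f1=0.10, f2=0.02, seed=0):
    """u(t) = A*cos(2*pi*f1*t) + B*sin(2*pi*f2*t) + C*(rand - 0.5)."""
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    return (A * np.cos(2 * np.pi * f1 * t)
            + B * np.sin(2 * np.pi * f2 * t)
            + C * (rng.random(T) - 0.5))

u = distorted_input(2000)
```

The two incommensurate frequencies plus the noise floor make the delayed-recall task nontrivial for all four neuron models.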
<table-wrap id="T3" position="float">
<label>TABLE 3</label>
<caption>
<p>Linear memory capacity (MC) for different neuron models.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="center">Model</th>
<th align="center">Reservoir size</th>
<th colspan="2" align="center">MC</th>
</tr>
<tr>
<th align="left"/>
<th align="left"/>
<th align="center">
<italic>b</italic> &#x3d; 0%</th>
<th align="center">
<italic>b</italic> &#x3d; 5%</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="2" align="center">Analog</td>
<td align="center">
<italic>N</italic> &#x3d; 40</td>
<td align="center">39.0</td>
<td align="center">32.5</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 50</td>
<td align="center">45.2</td>
<td align="center">36.2</td>
</tr>
<tr>
<td rowspan="2" align="center">Binary</td>
<td align="center">
<italic>N</italic> &#x3d; 40</td>
<td align="center">2.7</td>
<td align="center">2.8</td>
</tr>
<tr>
<td align="center">
<italic>N</italic> &#x3d; 50</td>
<td align="center">3.4</td>
<td align="center">3.2</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Besides the previously mentioned properties, physical neuromorphic systems can exhibit chaotic or edge-of-chaos dynamics, which have been shown to enhance performance on complex learning tasks (<xref ref-type="bibr" rid="B43">Kumar et al., 2017</xref>; <xref ref-type="bibr" rid="B30">Hochstetter et al., 2021</xref>; <xref ref-type="bibr" rid="B55">Nishioka et al., 2022</xref>). The edge of chaos refers to the transition point between ordered and chaotic behavior in a system. In the discussed models, it may be possible to reach this regime by injecting increasing amounts of noise, and the resulting dynamics could potentially improve network performance. We find that the learning process becomes more robust with an increased degree of stochasticity in the neuron models, which could be a signature of such edge-of-chaos performance improvement. However, the prediction accuracy and the linear MC tend to decrease with a higher degree of stochasticity, so this trade-off must be considered. It should be noted that a more comprehensive analysis is required to fully understand the impact of edge-of-chaos behavior on the discussed neuron models; this is beyond the scope of this paper and will be explored in future studies.</p>
</sec>
</sec>
<sec sec-type="conclusion" id="s5">
<title>5 Conclusion</title>
<p>In summary, we studied different neuron models for an analog signal inferencing (time-series prediction) task in the context of reservoir computing and evaluated their performance for various input functions. We show that the performance metrics are better for ASN than for BSN for both clean and distorted input signals. We find that an increasing degree of stochasticity makes the models more robust but decreases the prediction accuracy, introducing a trade-off between accuracy and robustness that depends on the application requirements and specifications. Furthermore, the ASN model turns out to be the most suitable for hardware implementation, owing to the smallest dynamic range of the learned synaptic weights, although other aspects, e.g., energy requirement, scalability, and reconfigurability, remain to be assessed. Additionally, we estimate the linear memory capacity of the different neuron models, which suggests that analog neurons have a higher ability to reconstruct past input signals from the present reservoir state. These findings may provide critical insights for choosing suitable neuron models for real-time signal-processing tasks and pave the way toward building energy-efficient neuromorphic computing platforms.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s6">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s7">
<title>Author contributions</title>
<p>SG, MM, and AG conceived the idea. SG wrote the base simulation codes and MM modified and parallelized the base simulation codes for HPC, performed all the simulations, and generated the results. All authors analyzed the results, contributed to the manuscript, and approved the submitted version.</p>
</sec>
<sec id="s8">
<title>Funding</title>
<p>This work was supported by DRS Technology and in part by the NSF I/UCRC on Multi-functional Integrated System Technology (MIST) Center; IIP-1439644, IIP-1439680, IIP-1738752, IIP-1939009, IIP-1939050, and IIP-1939012.</p>
</sec>
<ack>
<p>We thank Kerem Yunus Camsari, Marco Lopez, Tony Ragucci, and Faiyaz Elahi Mullick for useful discussions. All the calculations are done using the computational resources from High-Performance Computing systems at the University of Virginia (Rivanna) and the Extreme Science and Engineering Discovery Environment (XSEDE).</p>
</ack>
<sec sec-type="COI-statement" id="s9">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abiodun</surname>
<given-names>O. I.</given-names>
</name>
<name>
<surname>Jantan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Omolara</surname>
<given-names>A. E.</given-names>
</name>
<name>
<surname>Dada</surname>
<given-names>K. V.</given-names>
</name>
<name>
<surname>Mohamed</surname>
<given-names>N. A.</given-names>
</name>
<name>
<surname>Arshad</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>State-of-the-art in artificial neural network applications: A survey</article-title>. <source>Heliyon</source> <volume>4</volume> (<issue>11</issue>), <fpage>e00938</fpage>. <pub-id pub-id-type="doi">10.1016/j.heliyon.2018.e00938</pub-id>
</citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abreu Araujo</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Riou</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Torrejon</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tsunegi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Querlioz</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Yakushiji</surname>
<given-names>K.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Role of non-linear data processing on speech recognition task in the framework of reservoir computing</article-title>. <source>Sci. Rep.</source> <volume>10</volume>, <fpage>1</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-019-56991-x</pub-id>
</citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Adam</surname>
<given-names>G. C.</given-names>
</name>
<name>
<surname>Hoskins</surname>
<given-names>B. D.</given-names>
</name>
<name>
<surname>Prezioso</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Merrikh-Bayat</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Chakrabarti</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Strukov</surname>
<given-names>D. B.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>3-D memristor crossbars for analog and neuromorphic computing applications</article-title>. <source>IEEE Trans. Electron Devices</source> <volume>64</volume> (<issue>1</issue>), <fpage>312</fpage>&#x2013;<lpage>318</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2016.2630925</pub-id>
</citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baldassi</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Gerace</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kappen</surname>
<given-names>H. J.</given-names>
</name>
<name>
<surname>Lucibello</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Saglietti</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Tartaglione</surname>
<given-names>E.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Role of synaptic stochasticity in training low-precision neural networks</article-title>. <source>Phys. Rev. Lett.</source> <volume>120</volume> (<issue>26</issue>), <fpage>268103</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.120.268103</pub-id>
</citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barna</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Kaski</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>1990</year>). <article-title>Stochastic vs. Deterministic neural networks for pattern recognition</article-title>. <source>Phys. Scr.</source> <volume>T33</volume>, <fpage>110</fpage>&#x2013;<lpage>115</lpage>. <pub-id pub-id-type="doi">10.1088/0031-8949/1990/T33/019</pub-id>
</citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benjamin</surname>
<given-names>B. V.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>McQuinn</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Choudhary</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chandrasekaran</surname>
<given-names>A. R.</given-names>
</name>
<name>
<surname>Bussat</surname>
<given-names>J. M.</given-names>
</name>
<etal/>
</person-group> (<year>2014</year>). <article-title>Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations</article-title>. <source>Proc. IEEE</source> <volume>102</volume> (<issue>5</issue>), <fpage>699</fpage>&#x2013;<lpage>716</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2014.2313565</pub-id>
</citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bick</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Goodfellow</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Laing</surname>
<given-names>C. R.</given-names>
</name>
<name>
<surname>Martens</surname>
<given-names>E. A.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: A review</article-title>. <source>J. Math. Neurosci.</source> <volume>10</volume> (<issue>1</issue>), <fpage>9</fpage>&#x2013;<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1186/s13408-020-00086-9</pub-id>
</citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<collab>Big data</collab> (<year>2018</year>). <article-title>Big data needs a hardware revolution</article-title>. <source>Nature</source> <volume>554</volume>, <fpage>145</fpage>&#x2013;<lpage>146</lpage>. <pub-id pub-id-type="doi">10.1038/d41586-018-01683-1</pub-id>
</citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brigner</surname>
<given-names>W. H.</given-names>
</name>
<name>
<surname>Hassan</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Bennett</surname>
<given-names>C. H.</given-names>
</name>
<name>
<surname>Garcia-Sanchez</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Cui</surname>
<given-names>C.</given-names>
</name>
<etal/>
</person-group> (<year>2022</year>). <article-title>Domain wall leaky integrate-and-fire neurons with shape-based configurable activation functions</article-title>. <source>IEEE Trans. Electron Devices</source> <volume>69</volume> (<issue>5</issue>), <fpage>2353</fpage>&#x2013;<lpage>2359</lpage>. <pub-id pub-id-type="doi">10.1109/TED.2022.3159508</pub-id>
</citation>
</ref>
<ref id="B10">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Brown</surname>
<given-names>S. D.</given-names>
</name>
<name>
<surname>Chakma</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Musabbir Adnan</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hasan Sakib</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rose</surname>
<given-names>G. S.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>Stochasticity in neuromorphic computing: Evaluating randomness for improved performance</article-title>,&#x201d; in <conf-name>2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS)</conf-name>, <conf-loc>Genoa, Italy</conf-loc>, <conf-date>27-29 November 2019</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>454</fpage>&#x2013;<lpage>457</lpage>. <pub-id pub-id-type="doi">10.1109/ICECS46596.2019.8965057</pub-id>
</citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burkitt</surname>
<given-names>A. N.</given-names>
</name>
</person-group> (<year>2006</year>). <article-title>A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input</article-title>. <source>Biol. Cybern.</source> <volume>95</volume> (<issue>1</issue>), <fpage>1</fpage>&#x2013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-006-0068-6</pub-id>
</citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Camsari</surname>
<given-names>K. Y.</given-names>
</name>
<name>
<surname>Faria</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Sutton</surname>
<given-names>B. M.</given-names>
</name>
<name>
<surname>Datta</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2017a</year>). <article-title>Stochastic <italic>p</italic>-bits for invertible logic</article-title>. <source>Phys. Rev. X</source> <volume>7</volume> (<issue>3</issue>), <fpage>031014</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevX.7.031014</pub-id>
</citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Camsari</surname>
<given-names>K. Y.</given-names>
</name>
<name>
<surname>Salahuddin</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Datta</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2017b</year>). <article-title>Implementing p-bits with embedded MTJ</article-title>. <source>IEEE Electron Device Lett.</source> <volume>38</volume> (<issue>12</issue>), <fpage>1767</fpage>&#x2013;<lpage>1770</lpage>. <comment>ISSN 1558-0563</comment>. <pub-id pub-id-type="doi">10.1109/LED.2017.2768321</pub-id>
</citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Christensen</surname>
<given-names>D. V.</given-names>
</name>
<name>
<surname>Dittmann</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Linares-Barranco</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Sebastian</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gallo</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Redaelli</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2022</year>). <article-title>2022 roadmap on neuromorphic computing and engineering</article-title>. <source>Neuromorph. Comput. Eng.</source> <volume>2</volume> (<issue>2</issue>), <fpage>022501</fpage>. <pub-id pub-id-type="doi">10.1088/2634-4386/ac4a83</pub-id>
</citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cook</surname>
<given-names>R. L.</given-names>
</name>
</person-group> (<year>1986</year>). <article-title>Stochastic sampling in computer graphics</article-title>. <source>ACM Trans. Graph.</source> <volume>5</volume> (<issue>1</issue>), <fpage>51</fpage>&#x2013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.1145/7529.8927</pub-id>
</citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Davidson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Furber</surname>
<given-names>S. B.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Comparison of artificial and spiking neural networks on digital hardware</article-title>. <source>Front. Neurosci.</source> <volume>15</volume>, <fpage>651141</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2021.651141</pub-id>
</citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Davies</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Srinivasa</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>T. H.</given-names>
</name>
<name>
<surname>Chinya</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Cao</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Choday</surname>
<given-names>S. H.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Loihi: A neuromorphic manycore processor with on-chip learning</article-title>. <source>IEEE Micro</source> <volume>38</volume> (<issue>1</issue>), <fpage>82</fpage>&#x2013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1109/MM.2018.112130359</pub-id>
</citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duan</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Jing</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zou</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>T.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Spiking neurons with spatiotemporal dynamics and gain modulation for monolithically integrated memristive neural networks</article-title>. <source>Nat. Commun.</source> <volume>11</volume>, <fpage>3399</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-020-17215-3</pub-id>
</citation>
</ref>
<ref id="B19">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Engedy</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Horv&#xe1;th</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2012</year>). &#x201c;<article-title>Optimal control with reinforcement learning using reservoir computing and Gaussian mixture</article-title>,&#x201d; in <conf-name>2012 IEEE International Instrumentation and Measurement Technology Conference Proceedings</conf-name>, <conf-loc>Graz, Austria</conf-loc>, <conf-date>13-16 May 2012</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1062</fpage>&#x2013;<lpage>1066</lpage>.</citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Faisal</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Selen</surname>
<given-names>L. P. J.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D. M.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Noise in the nervous system</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>9</volume> (<issue>4</issue>), <fpage>292</fpage>&#x2013;<lpage>303</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2258</pub-id>
</citation>
</ref>
<ref id="B21">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Farquhar</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Gordon</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hasler</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2006</year>). &#x201c;<article-title>A field programmable neural array</article-title>,&#x201d; in <conf-name>2006 IEEE International Symposium on Circuits and Systems</conf-name>, <conf-loc>Kos, Greece</conf-loc>, <conf-date>21-24 May 2006</conf-date> (<publisher-name>IEEE</publisher-name>). <pub-id pub-id-type="doi">10.1109/ISCAS.2006.1693534</pub-id>
</citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Furber</surname>
<given-names>S. B.</given-names>
</name>
<name>
<surname>Galluppi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Temple</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Plana</surname>
<given-names>L. A.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>The SpiNNaker project</article-title>. <source>Proc. IEEE</source> <volume>102</volume> (<issue>5</issue>), <fpage>652</fpage>&#x2013;<lpage>665</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2014.2304638</pub-id>
</citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ganguly</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Camsari</surname>
<given-names>K. Y.</given-names>
</name>
<name>
<surname>Ghosh</surname>
<given-names>A. W.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Analog signal processing using stochastic magnets</article-title>. <source>IEEE Access</source> <volume>9</volume>, <fpage>92640</fpage>&#x2013;<lpage>92650</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2021.3075839</pub-id>
</citation>
</ref>
<ref id="B24">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Ganguly</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Stan</surname>
<given-names>M. R.</given-names>
</name>
<name>
<surname>Ghosh</surname>
<given-names>A. W.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Hardware based spatio-temporal neural processing backend for imaging sensors: Towards a smart camera</article-title>,&#x201d; in <source>Image sensing technologies: Materials, devices, systems, and applications V</source> (<publisher-loc>Washington USA</publisher-loc>: <publisher-name>SPIE</publisher-name>), <fpage>135</fpage>&#x2013;<lpage>145</lpage>. <pub-id pub-id-type="doi">10.1117/12.2305137</pub-id>
</citation>
</ref>
<ref id="B25">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Goldberger</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ben-Reuven</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2017</year>). &#x201c;<article-title>Training deep neural-networks using a noise adaptation layer</article-title>,&#x201d; in <conf-name>International Conference on Learning Representations</conf-name>, <conf-loc>Toulon, France</conf-loc>, <conf-date>April 24 - 26, 2017</conf-date>.</citation>
</ref>
<ref id="B26">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Goodfellow</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Courville</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2016</year>). <source>Deep learning</source>. <publisher-loc>Massachusetts, US</publisher-loc>: <publisher-name>MIT press</publisher-name>.</citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grollier</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Querlioz</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Camsari</surname>
<given-names>K. Y.</given-names>
</name>
<name>
<surname>Everschor-Sitte</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Fukami</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Stiles</surname>
<given-names>M. D.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Neuromorphic spintronics</article-title>. <source>Nat. Electron.</source> <volume>3</volume> (<issue>7</issue>), <fpage>360</fpage>&#x2013;<lpage>370</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-019-0360-9</pub-id>
</citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Xiang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Integrated neuromorphic photonics: Synapses, neurons, and neural networks</article-title>. <source>Adv. Photonics Res.</source> <volume>2</volume> (<issue>6</issue>), <fpage>2000212</fpage>. <pub-id pub-id-type="doi">10.1002/adpr.202000212</pub-id>
</citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harmon</surname>
<given-names>L. D.</given-names>
</name>
</person-group> (<year>1959</year>). <article-title>Artificial neuron</article-title>. <source>Science</source> <volume>129</volume> (<issue>3354</issue>), <fpage>962</fpage>&#x2013;<lpage>963</lpage>. <pub-id pub-id-type="doi">10.1126/science.129.3354.962</pub-id>
</citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hochstetter</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Loeffler</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Diaz-Alvarez</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nakayama</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kuncic</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Avalanches and edge-of-chaos learning in neuromorphic nanowire networks</article-title>. <source>Nat. Commun.</source> <volume>12</volume> (<issue>4008</issue>), <fpage>4008</fpage>&#x2013;<lpage>4013</lpage>. <pub-id pub-id-type="doi">10.1038/s41467-021-24260-z</pub-id>
</citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Rose</surname>
<given-names>G. S.</given-names>
</name>
<name>
<surname>Linderman</surname>
<given-names>R. W.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Memristor crossbar-based neuromorphic computing system: A case study</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst.</source> <volume>25</volume> (<issue>10</issue>), <fpage>1864</fpage>&#x2013;<lpage>1878</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2013.2296777</pub-id>
</citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>C. W.</given-names>
</name>
<name>
<surname>Lim</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Courville</surname>
<given-names>A. C.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>A variational perspective on diffusion-based generative models and score matching</article-title>. <source>Adv. Neural Inf. Process. Syst.</source> <volume>34</volume>, <fpage>22863</fpage>&#x2013;<lpage>22876</lpage>.</citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Innocenti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Di Marco</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tesi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Forti</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Memristor circuits for simulating neuron spiking and burst phenomena</article-title>. <source>Front. Neurosci.</source> <volume>15</volume>, <fpage>681035</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2021.681035</pub-id>
</citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Inubushi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Yoshimura</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Reservoir computing beyond memory-nonlinearity trade-off</article-title>. <source>Sci. Rep.</source> <volume>7</volume>, <fpage>1</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-017-10257-6</pub-id>
</citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jadaun</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Cui</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Incorvia</surname>
<given-names>J. A. C.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Adaptive cognition implemented with a context-aware and flexible neuron for next-generation artificial intelligence</article-title>. <source>PNAS Nexus</source> <volume>1</volume> (<issue>5</issue>), <fpage>pgac206</fpage>. <pub-id pub-id-type="doi">10.1093/pnasnexus/pgac206</pub-id>
</citation>
</ref>
<ref id="B36">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Jaeger</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2002</year>). <source>Short term memory in echo state networks</source>. <comment>GMD Report 152</comment>. <publisher-loc>Sankt Augustin</publisher-loc>: <publisher-name>GMD-German National Research Institute for Computer Science</publisher-name>.</citation>
</ref>
<ref id="B37">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jalalvand</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Van Wallendael</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Walle</surname>
<given-names>R. V. D.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Real-time reservoir computing network-based systems for detection tasks on visual contents</article-title>,&#x201d; in <conf-name>2015 7th International Conference on Computational Intelligence, Communication Systems and Networks</conf-name>, <conf-loc>Riga, Latvia</conf-loc>, <conf-date>03-05 June 2015</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>146</fpage>&#x2013;<lpage>151</lpage>. <pub-id pub-id-type="doi">10.1109/CICSyN.2015.35</pub-id>
</citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jim</surname>
<given-names>K. C.</given-names>
</name>
<name>
<surname>Giles</surname>
<given-names>C. L.</given-names>
</name>
<name>
<surname>Horne</surname>
<given-names>B. G.</given-names>
</name>
</person-group> (<year>1996</year>). <article-title>An analysis of noise in recurrent neural networks: Convergence and generalization</article-title>. <source>IEEE Trans. Neural Netw.</source> <volume>7</volume> (<issue>6</issue>), <fpage>1424</fpage>&#x2013;<lpage>1438</lpage>. <pub-id pub-id-type="doi">10.1109/72.548170</pub-id>
</citation>
</ref>
<ref id="B39">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jo</surname>
<given-names>S. H.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Ebong</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Bhadviya</surname>
<given-names>B. B.</given-names>
</name>
<name>
<surname>Mazumder</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Nanoscale memristor device as synapse in neuromorphic systems</article-title>. <source>Nano Lett.</source> <volume>10</volume> (<issue>4</issue>), <fpage>1297</fpage>&#x2013;<lpage>1301</lpage>. <pub-id pub-id-type="doi">10.1021/nl904092h</pub-id>
</citation>
</ref>
<ref id="B40">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kandel</surname>
<given-names>E. R.</given-names>
</name>
<name>
<surname>Schwartz</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Jessell</surname>
<given-names>T. M.</given-names>
</name>
<name>
<surname>Siegelbaum</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hudspeth</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Mack</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2000</year>). <source>Principles of neural science</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>McGraw-Hill</publisher-name>.</citation>
</ref>
<ref id="B41">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kato</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tanaka</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Nakane</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hirose</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2022</year>). &#x201c;<article-title>Proposal of reconstructive reservoir computing to detect anomaly in time-series signals</article-title>,&#x201d; in <conf-name>2022 International Joint Conference on Neural Networks (IJCNN)</conf-name>, <conf-loc>Padua, Italy</conf-loc>, <conf-date>18-23 July 2022</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/IJCNN55064.2022.9892805</pub-id>
</citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kireev</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Jin</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>T. P.</given-names>
</name>
<name>
<surname>Bennett</surname>
<given-names>C. H.</given-names>
</name>
<name>
<surname>Akinwande</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2022</year>). <article-title>Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing</article-title>. <source>Nat. Commun.</source> <volume>13</volume>, <fpage>4386</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-022-32078-6</pub-id>
</citation>
</ref>
<ref id="B43">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kumar</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Strachan</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>R. S.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing</article-title>. <source>Nature</source> <volume>548</volume> (<issue>7667</issue>), <fpage>318</fpage>&#x2013;<lpage>321</lpage>. <pub-id pub-id-type="doi">10.1038/nature23307</pub-id>
</citation>
</ref>
<ref id="B44">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leonard</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Alamdar</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Jin</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Cui</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Akinola</surname>
<given-names>O. G.</given-names>
</name>
<etal/>
</person-group> (<year>2022</year>). <article-title>Shape&#x2010;dependent multi&#x2010;weight magnetic artificial synapses for neuromorphic computing</article-title>. <source>Adv. Electron. Mat.</source> <volume>8</volume> (<issue>12</issue>), <fpage>2200563</fpage>. <pub-id pub-id-type="doi">10.1002/aelm.202200563</pub-id>
</citation>
</ref>
<ref id="B45">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Han</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Chaotic time series prediction based on a novel robust echo state network</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst.</source> <volume>23</volume> (<issue>5</issue>), <fpage>787</fpage>&#x2013;<lpage>799</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2012.2188414</pub-id>
</citation>
</ref>
<ref id="B46">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Yan</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>Z.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). <article-title>A tantalum disulfide charge-density-wave stochastic artificial neuron for emulating neural statistical properties</article-title>. <source>Nano Lett.</source> <volume>21</volume> (<issue>8</issue>), <fpage>3465</fpage>&#x2013;<lpage>3472</lpage>. <pub-id pub-id-type="doi">10.1021/acs.nanolett.1c00108</pub-id>
</citation>
</ref>
<ref id="B47">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Locatelli</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Cros</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Grollier</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Spin-torque building blocks</article-title>. <source>Nat. Mat.</source> <volume>13</volume> (<issue>1</issue>), <fpage>11</fpage>&#x2013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1038/nmat3823</pub-id>
</citation>
</ref>
<ref id="B48">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Luko&#x161;evi&#x10d;ius</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2012</year>). &#x201c;<article-title>A practical guide to applying echo state networks</article-title>,&#x201d; in <source>Neural networks: Tricks of the trade</source>. <edition>Second Edition</edition> (<publisher-loc>Berlin, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>659</fpage>&#x2013;<lpage>686</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-35289-8_36</pub-id>
</citation>
</ref>
<ref id="B49">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lv</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>Z.</given-names>
</name>
<etal/>
</person-group> (<year>2022</year>). <article-title>Stochastic artificial synapses based on nanoscale magnetic tunnel junction for neuromorphic applications</article-title>. <source>Appl. Phys. Lett.</source> <volume>121</volume> (<issue>23</issue>), <fpage>232406</fpage>. <pub-id pub-id-type="doi">10.1063/5.0126392</pub-id>
</citation>
</ref>
<ref id="B50">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Markovi&#x107;</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mizrahi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Querlioz</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Grollier</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Physics for neuromorphic computing</article-title>. <source>Nat. Rev. Phys.</source> <volume>2</volume> (<issue>9</issue>), <fpage>499</fpage>&#x2013;<lpage>510</lpage>. <comment>ISSN 2522-5820</comment>. <pub-id pub-id-type="doi">10.1038/s42254-020-0208-2</pub-id>
</citation>
</ref>
<ref id="B51">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mead</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>1990</year>). <article-title>Neuromorphic electronic systems</article-title>. <source>Proc. IEEE</source> <volume>78</volume> (<issue>10</issue>), <fpage>1629</fpage>&#x2013;<lpage>1636</lpage>. <pub-id pub-id-type="doi">10.1109/5.58356</pub-id>
</citation>
</ref>
<ref id="B52">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mehonic</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kenyon</surname>
<given-names>A. J.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Emulating the electrical activity of the neuron using a silicon oxide RRAM cell</article-title>. <source>Front. Neurosci.</source> <volume>10</volume>, <fpage>57</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2016.00057</pub-id>
</citation>
</ref>
<ref id="B53">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Merolla</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Arthur</surname>
<given-names>J. V.</given-names>
</name>
<name>
<surname>Alvarez-Icaza</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Cassidy</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Sawada</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Akopyan</surname>
<given-names>F.</given-names>
</name>
<etal/>
</person-group> (<year>2014</year>). <article-title>A million spiking-neuron integrated circuit with a scalable communication network and interface</article-title>. <source>Science</source> <volume>345</volume> (<issue>6197</issue>), <fpage>668</fpage>&#x2013;<lpage>673</lpage>. <pub-id pub-id-type="doi">10.1126/science.1254642</pub-id>
</citation>
</ref>
<ref id="B54">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moon</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Shin</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Cai</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>S. H.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>Temporal data classification and forecasting using a memristor-based reservoir computing system</article-title>. <source>Nat. Electron.</source> <volume>2</volume> (<issue>10</issue>), <fpage>480</fpage>&#x2013;<lpage>487</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-019-0313-3</pub-id>
</citation>
</ref>
<ref id="B55">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nishioka</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Tsuchiya</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Namiki</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Takayanagi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Imura</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Koide</surname>
<given-names>Y.</given-names>
</name>
<etal/>
</person-group> (<year>2022</year>). <article-title>Edge-of-chaos learning achieved by ion-electron&#x2013;coupled dynamics in an ion-gating reservoir</article-title>. <source>Sci. Adv.</source> <volume>8</volume> (<issue>50</issue>), <fpage>eade1156</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.ade1156</pub-id>
</citation>
</ref>
<ref id="B56">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oostwal</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Straat</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Biehl</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation</article-title>. <source>Phys. A</source> <volume>564</volume>, <fpage>125517</fpage>. <pub-id pub-id-type="doi">10.1016/j.physa.2020.125517</pub-id>
</citation>
</ref>
<ref id="B57">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pei</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Deng</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>Towards artificial general intelligence with hybrid Tianjic chip architecture</article-title>. <source>Nature</source> <volume>572</volume> (<issue>7767</issue>), <fpage>106</fpage>&#x2013;<lpage>111</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-019-1424-8</pub-id>
</citation>
</ref>
<ref id="B58">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pyragas</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Pyragas</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Using reservoir computer to predict and prevent extreme events</article-title>. <source>Phys. Lett. A</source> <volume>384</volume> (<issue>24</issue>), <fpage>126591</fpage>. <pub-id pub-id-type="doi">10.1016/j.physleta.2020.126591</pub-id>
</citation>
</ref>
<ref id="B59">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rajendran</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Alibart</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Neuromorphic computing based on emerging memory technologies</article-title>. <source>IEEE J. Emerg. Sel. Top. Circuits Syst.</source> <volume>6</volume> (<issue>2</issue>), <fpage>198</fpage>&#x2013;<lpage>211</lpage>. <pub-id pub-id-type="doi">10.1109/JETCAS.2016.2533298</pub-id>
</citation>
</ref>
<ref id="B60">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Romeira</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Av&#xf3;</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Figueiredo</surname>
<given-names>J. M. L.</given-names>
</name>
<name>
<surname>Barland</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Javaloyes</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Regenerative memory in time-delayed neuromorphic photonic resonators</article-title>. <source>Sci. Rep.</source> <volume>6</volume>, <fpage>19510</fpage>. <pub-id pub-id-type="doi">10.1038/srep19510</pub-id>
</citation>
</ref>
<ref id="B61">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Roy</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Sharad</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Yogendra</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Brain-inspired computing with spin torque devices</article-title>,&#x201d; in <conf-name>2014 Design, Automation &#x26; Test in Europe Conference &#x26; Exhibition (DATE)</conf-name>, <conf-loc>Dresden, Germany</conf-loc>, <conf-date>24-28 March 2014</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.7873/DATE.2014.245</pub-id>
</citation>
</ref>
<ref id="B62">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Schemmel</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Br&#xfc;derle</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Gr&#xfc;bl</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hock</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Meier</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Millner</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2010</year>). &#x201c;<article-title>A wafer-scale neuromorphic hardware system for large-scale neural modeling</article-title>,&#x201d; in <conf-name>2010 IEEE International Symposium on Circuits and Systems (ISCAS)</conf-name>, <conf-loc>Paris, France</conf-loc>, <conf-date>30 May - 02 June 2010</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1947</fpage>&#x2013;<lpage>1950</lpage>. <pub-id pub-id-type="doi">10.1109/ISCAS.2010.5536970</pub-id>
</citation>
</ref>
<ref id="B63">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schuman</surname>
<given-names>C. D.</given-names>
</name>
<name>
<surname>Kulkarni</surname>
<given-names>S. R.</given-names>
</name>
<name>
<surname>Parsa</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mitchell</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Date</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Kay</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Opportunities for neuromorphic computing algorithms and applications</article-title>. <source>Nat. Comput. Sci.</source> <volume>2</volume> (<issue>1</issue>), <fpage>10</fpage>&#x2013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1038/s43588-021-00184-y</pub-id>
</citation>
</ref>
<ref id="B64">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Schuman</surname>
<given-names>C. D.</given-names>
</name>
<name>
<surname>Potok</surname>
<given-names>T. E.</given-names>
</name>
<name>
<surname>Patton</surname>
<given-names>R. M.</given-names>
</name>
<name>
<surname>Birdwell</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Dean</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Rose</surname>
<given-names>G. S.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <source>A survey of neuromorphic computing and neural networks in hardware</source>. <comment>arXiv</comment>. <pub-id pub-id-type="doi">10.48550/arXiv.1705.06963</pub-id>
</citation>
</ref>
<ref id="B65">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Sengupta</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Panda</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Raghunathan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Roy</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2016b</year>). &#x201c;<article-title>Neuromorphic computing enabled by spin-transfer torque devices</article-title>,&#x201d; in <conf-name>2016 29th International Conference on VLSI Design and 2016 15th International Conference on Embedded Systems (VLSID)</conf-name>, <conf-loc>Kolkata, India</conf-loc>, <conf-date>04-08 January 2016</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>32</fpage>&#x2013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1109/VLSID.2016.117</pub-id>
</citation>
</ref>
<ref id="B66">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Sengupta</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Yogendra</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Roy</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2016a</year>). &#x201c;<article-title>Spintronic devices for ultra-low power neuromorphic computation (Special session paper)</article-title>,&#x201d; in <conf-name>2016 IEEE International Symposium on Circuits and Systems (ISCAS)</conf-name>, <conf-loc>Montreal, QC, Canada</conf-loc>, <conf-date>22-25 May 2016</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>922</fpage>&#x2013;<lpage>925</lpage>. <pub-id pub-id-type="doi">10.1109/ISCAS.2016.7527392</pub-id>
</citation>
</ref>
<ref id="B67">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Serb</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Corna</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>George</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Khiat</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Rocchi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Reato</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Memristive synapses connect brain and silicon spiking neurons</article-title>. <source>Sci. Rep.</source> <volume>10</volume> (<issue>2590</issue>), <fpage>2590</fpage>&#x2013;<lpage>2597</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-020-58831-9</pub-id>
</citation>
</ref>
<ref id="B68">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shainline</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Buckley</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Mirin</surname>
<given-names>R. P.</given-names>
</name>
<name>
<surname>Nam</surname>
<given-names>S. W.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Superconducting optoelectronic circuits for neuromorphic computing</article-title>. <source>Phys. Rev. Appl.</source> <volume>7</volume> (<issue>3</issue>), <fpage>034013</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevApplied.7.034013</pub-id>
</citation>
</ref>
<ref id="B69">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shastri</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Tait</surname>
<given-names>A. N.</given-names>
</name>
<name>
<surname>Ferreira de Lima</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Pernice</surname>
<given-names>W. H. P.</given-names>
</name>
<name>
<surname>Bhaskaran</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wright</surname>
<given-names>C. D.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). <article-title>Photonics for artificial intelligence and neuromorphic computing</article-title>. <source>Nat. Photonics</source> <volume>15</volume> (<issue>2</issue>), <fpage>102</fpage>&#x2013;<lpage>114</lpage>. <pub-id pub-id-type="doi">10.1038/s41566-020-00754-y</pub-id>
</citation>
</ref>
<ref id="B70">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Siddiqui</surname>
<given-names>S. A.</given-names>
</name>
<name>
<surname>Dutta</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ross</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Baldo</surname>
<given-names>M. A.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Magnetic domain wall based synaptic and activation function generator for neuromorphic accelerators</article-title>. <source>Nano Lett.</source> <volume>20</volume> (<issue>2</issue>), <fpage>1033</fpage>&#x2013;<lpage>1040</lpage>. <pub-id pub-id-type="doi">10.1021/acs.nanolett.9b04200</pub-id>
</citation>
</ref>
<ref id="B71">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>K. M.</given-names>
</name>
<name>
<surname>Jeong</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Pan</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Xia</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cha</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Skyrmion-based artificial synapses for neuromorphic computing</article-title>. <source>Nat. Electron.</source> <volume>3</volume> (<issue>3</issue>), <fpage>148</fpage>&#x2013;<lpage>155</lpage>. <pub-id pub-id-type="doi">10.1038/s41928-020-0385-0</pub-id>
</citation>
</ref>
<ref id="B72">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Squire</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Berg</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Bloom</surname>
<given-names>F. E.</given-names>
</name>
<name>
<surname>Du Lac</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ghosh</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Spitzer</surname>
<given-names>N. C.</given-names>
</name>
</person-group> (<year>2012</year>). <source>Fundamental neuroscience</source>. <publisher-loc>Massachusetts, US</publisher-loc>: <publisher-name>Academic Press</publisher-name>.</citation>
</ref>
<ref id="B73">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Suri</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Parmar</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Querlioz</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Alibart</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Neuromorphic hybrid RRAM-CMOS RBM architecture</article-title>,&#x201d; in <conf-name>2015 15th Non-Volatile Memory Technology Symposium (NVMTS)</conf-name>, <conf-loc>Beijing, China</conf-loc>, <conf-date>12-14 October 2015</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/NVMTS.2015.7457484</pub-id>
</citation>
</ref>
<ref id="B74">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Szanda&#x142;a</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Review and comparison of commonly used activation functions for deep neural networks</article-title>,&#x201d; in <source>Bio-inspired neurocomputing</source> (<publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>203</fpage>&#x2013;<lpage>224</lpage>. <pub-id pub-id-type="doi">10.1007/978-981-15-5495-7_11</pub-id>
</citation>
</ref>
<ref id="B75">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tanaka</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Yamane</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>H&#xe9;roux</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Nakane</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kanazawa</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Takeda</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>Recent advances in physical reservoir computing: A review</article-title>. <source>Neural Netw.</source> <volume>115</volume>, <fpage>100</fpage>&#x2013;<lpage>123</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2019.03.005</pub-id>
</citation>
</ref>
<ref id="B76">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Torrejon</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Riou</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Abreu Araujo</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Tsunegi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Khalsa</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Querlioz</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <article-title>Neuromorphic computing with nanoscale spintronic oscillators</article-title>. <source>Nature</source> <volume>547</volume> (<issue>7664</issue>), <fpage>428</fpage>&#x2013;<lpage>431</lpage>. <pub-id pub-id-type="doi">10.1038/nature23011</pub-id>
</citation>
</ref>
<ref id="B77">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Triefenbach</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Jalalvand</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schrauwen</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Martens</surname>
<given-names>J. P.</given-names>
</name>
</person-group> (<year>2010</year>). &#x201c;<article-title>Phoneme recognition with large hierarchical reservoirs</article-title>,&#x201d; in <source>Advances in neural information processing systems</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Lafferty</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Shawe-Taylor</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zemel</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Culotta</surname>
<given-names>A.</given-names>
</name>
</person-group> (<publisher-loc>Red Hook, NY</publisher-loc>: <publisher-name>Curran Associates, Inc.</publisher-name>).</citation>
</ref>
<ref id="B78">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Upadhyay</surname>
<given-names>N. K.</given-names>
</name>
<name>
<surname>Joshi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>J. J.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Synaptic electronics and neuromorphic computing</article-title>. <source>Sci. China Inf. Sci.</source> <volume>59</volume> (<issue>6</issue>), <fpage>061404</fpage>. <pub-id pub-id-type="doi">10.1007/s11432-016-5565-1</pub-id>
</citation>
</ref>
<ref id="B79">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Verstraeten</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Schrauwen</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>D&#x2019;Haene</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Stroobandt</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>An experimental unification of reservoir computing methods</article-title>. <source>Neural Netw.</source> <volume>20</volume> (<issue>3</issue>), <fpage>391</fpage>&#x2013;<lpage>403</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2007.04.003</pub-id>
</citation>
</ref>
<ref id="B80">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vincent</surname>
<given-names>A. F.</given-names>
</name>
<name>
<surname>Larroque</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Locatelli</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Ben Romdhane</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Bichler</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Gamrat</surname>
<given-names>C.</given-names>
</name>
<etal/>
</person-group> (<year>2015</year>). <article-title>Spin-transfer torque magnetic memory as a stochastic memristive synapse for neuromorphic systems</article-title>. <source>IEEE Trans. Biomed. Circuits Syst.</source> <volume>9</volume> (<issue>2</issue>), <fpage>166</fpage>&#x2013;<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1109/TBCAS.2015.2414423</pub-id>
</citation>
</ref>
<ref id="B81">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>R. M.</given-names>
</name>
<name>
<surname>Thakur</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>van Schaik</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>An FPGA-based massively parallel neuromorphic cortex simulator</article-title>. <source>Front. Neurosci.</source> <volume>12</volume>, <fpage>213</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00213</pub-id>
</citation>
</ref>
<ref id="B82">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Strukov</surname>
<given-names>D. B.</given-names>
</name>
<name>
<surname>Stewart</surname>
<given-names>D. R.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Memristive devices for computing</article-title>. <source>Nat. Nanotechnol.</source> <volume>8</volume> (<issue>1</issue>), <fpage>13</fpage>&#x2013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1038/nnano.2012.240</pub-id>
</citation>
</ref>
<ref id="B83">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yao</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Tang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>W.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Fully hardware-implemented memristor convolutional neural network</article-title>. <source>Nature</source> <volume>577</volume> (<issue>7792</issue>), <fpage>641</fpage>&#x2013;<lpage>646</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-020-1942-4</pub-id>
</citation>
</ref>
<ref id="B84">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Woodland</surname>
<given-names>P. C.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Parameterised sigmoid and ReLU hidden activation functions for DNN acoustic modelling</article-title>,&#x201d; in <conf-name>Interspeech</conf-name>, <conf-loc>Dresden, Germany</conf-loc>, <conf-date>September 6-10, 2015</conf-date>.</citation>
</ref>
</ref-list>
</back>
</article>