<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurorobot.</journal-id>
<journal-title>Frontiers in Neurorobotics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurorobot.</abbrev-journal-title>
<issn pub-type="epub">1662-5218</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnbot.2022.1041108</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>An overview of brain-like computing: Architecture, applications, and future trends</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Ou</surname> <given-names>Wei</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1627958/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Xiao</surname> <given-names>Shitao</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1994825/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Zhu</surname> <given-names>Chengyu</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Han</surname> <given-names>Wenbao</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1627959/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Zhang</surname> <given-names>Qionglu</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2094951/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>The School of Cyberspace Security, Hainan University</institution>, <addr-line>Hainan</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Henan Key Laboratory of Network Cryptography Technology</institution>, <addr-line>Zhengzhou</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>The School of Computer Science and Technology</institution>, <addr-line>Hainan</addr-line>, <country>China</country></aff>
<aff id="aff4"><sup>4</sup><institution>State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences</institution>, <addr-line>Beijing</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Wellington Pinheiro dos Santos, Federal University of Pernambuco, Brazil</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Juliana Gomes, Federal University of Pernambuco, Brazil; Maira Santana, Universidade de Pernambuco, Brazil</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Shitao Xiao <email>20191620310148&#x00040;hainanu.edu.cn</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>24</day>
<month>11</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>16</volume>
<elocation-id>1041108</elocation-id>
<history>
<date date-type="received">
<day>10</day>
<month>09</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>31</day>
<month>10</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Ou, Xiao, Zhu, Han and Zhang.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Ou, Xiao, Zhu, Han and Zhang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>With the development of technology, Moore&#x00027;s law will come to an end, and scientists are trying to find a new way out in brain-like computing. But we still know very little about how the brain works. At the present stage of research, brain-like models are all structured to mimic the brain in order to achieve some of the brain&#x00027;s functions, and then continue to improve the theories and models. This article summarizes the important progress and status of brain-like computing, summarizes the generally accepted and feasible brain-like computing models, introduces, analyzes, and compares the more mature brain-like computing chips, outlines the attempts and challenges of brain-like computing applications at this stage, and looks forward to the future development of brain-like computing. It is hoped that the summarized results will help relevant researchers and practitioners to quickly grasp the research progress in the field of brain-like computing and acquire the application methods and related knowledge in this field.</p></abstract>
<kwd-group>
<kwd>brain-like computing</kwd>
<kwd>neuronal models</kwd>
<kwd>spiking neuron networks</kwd>
<kwd>spiking neural learning</kwd>
<kwd>learning algorithms</kwd>
<kwd>neuromorphic chips</kwd>
</kwd-group>
<counts>
<fig-count count="12"/>
<table-count count="2"/>
<equation-count count="13"/>
<ref-count count="141"/>
<page-count count="22"/>
<word-count count="11871"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Achieving artificial intelligence has long been one of humanity&#x00027;s major goals and a topic of heated debate. Since the Dartmouth Conference in 1956 (McCarthy et al., <xref ref-type="bibr" rid="B87">2006</xref>), the development of AI has gone through three waves. The underlying approaches can be roughly divided into four basic schools of thought: symbolism, connectionism, behaviorism, and statistical learning. Each of these schools has captured some characteristics of &#x0201C;intelligence&#x0201D; in different respects, but has only partially surpassed the human brain in specific functions. In recent years, the computer hardware base has matured, and deep learning has revealed its huge potential (Huang Y. et al., <xref ref-type="bibr" rid="B67">2022</xref>; Yang et al., <xref ref-type="bibr" rid="B131">2022</xref>). In 2016, AlphaGo defeated the ninth-dan Go master Lee Sedol, marking the peak of the third wave of the artificial intelligence technology revolution.</p>
<p>In particular, the realization of AI has become a focal point of competition among national powers. In 2017, China released and implemented its new-generation artificial intelligence development plan. In June 2019, the United States released the latest version of the <italic>National Artificial Intelligence Research and Development Strategic Plan</italic> (Amundson et al., <xref ref-type="bibr" rid="B9">1911</xref>). Europe has also identified AI as a priority: in 2016, the European Commission proposed a legislative motion on AI; in 2018, it submitted <italic>European Artificial Intelligence</italic> (Delponte and Tamburrini, <xref ref-type="bibr" rid="B35">2018</xref>) and published the <italic>Coordinated Plan on Artificial Intelligence</italic> under the theme &#x0201C;Made in Europe with Artificial Intelligence.&#x0201D;</p>
<p>Achieving artificial intelligence requires more powerful information processing capabilities, but the current classical computer architecture cannot meet the demands of massive data computation. Classical computer systems face two major bottlenecks: the memory-wall effect inherent in the von Neumann architecture, and the anticipated failure of Moore&#x00027;s law within the next few years. On the one hand, the traditional processor architecture is inefficient and energy-intensive; when dealing with intelligent problems in real time, it is difficult to construct suitable algorithms for processing unstructured information. In addition, the mismatch between the rate at which programs and data are transferred back and forth and the rate at which the central processor processes information produces the memory-wall effect. On the other hand, as chip feature sizes approach the size of a single atom, devices approach the limits of physical miniaturization, so the cost of performance enhancement grows and technical implementation becomes more difficult. Researchers have therefore placed their hopes on brain-like computing to break through the current technical bottleneck.</p>
<p>Early research in brain-like computing followed the traditional path of computer development: first understand how the human brain works, then develop a neuromorphic computer based on that theory. But after more than a decade of research, progress in brain science remained minimal, so the path of theory before technology was abandoned by mainstream brain-like research. Looking back at human development, many technologies have preceded their theories; in the case of airplanes, for example, the physical object was built first, and research then refined the theory. On this basis, researchers adopted structural brain analogs: using existing brain science knowledge and technology to simulate the structure of the human brain, and then refining the theory after success.</p>
<p>This article first introduces, in general terms, the rationale and research significance of brain-like computing. We then summarize its research history, compare the current research progress, and offer analysis and outlook. The article structure is shown in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>The structure of the article: analysis of relevant models, establishment of related platforms, implementation of related applications, challenges, and prospects.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0001.tif"/>
</fig>
</sec>
<sec id="s2">
<title>Progress in brain-like computing</title>
<p>Brain-like computers use spiking neural networks (SNNs) instead of the von Neumann architecture of classical computers, and use micro- and nano-optoelectronic devices to simulate the information-processing characteristics of biological neurons and synapses (Huang, <xref ref-type="bibr" rid="B66">2016</xref>). Brain-like computers are not a new idea: in 1943, before the invention of the computer, Turing and Shannon debated the imagined &#x0201C;computer&#x0201D; (Hodges and Turing, <xref ref-type="bibr" rid="B64">1992</xref>). In 1950, Turing addressed the topic in <italic>Computing Machinery and Intelligence</italic> (Neuman, <xref ref-type="bibr" rid="B96">1958</xref>). In 1958, von Neumann also discussed neurons, neural impulses, neural networks, and the information-processing mechanisms of the human brain in <italic>The Computer and the Brain</italic> (von Neumann, <xref ref-type="bibr" rid="B136">1958</xref>). However, due to the technological limitations of the time and the ideal future promised by Moore&#x00027;s law, brain-like computing did not receive enough attention. Around 2005, it came to be generally believed that Moore&#x00027;s law would end around 2020, and researchers began to shift their focus to brain-like computing, which then officially entered an accelerated period of development.</p>
<p>A summary of the evolution of brain-like computing (Mead, <xref ref-type="bibr" rid="B89">1989</xref>; Gu and Pan, <xref ref-type="bibr" rid="B57">2015</xref>; Andreopoulos et al., <xref ref-type="bibr" rid="B11">2018</xref>; Boybat et al., <xref ref-type="bibr" rid="B19">2018</xref>; Gleeson et al., <xref ref-type="bibr" rid="B53">2019</xref>) is shown in <xref ref-type="fig" rid="F2">Figure 2</xref>.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Brain-like computing has evolved from conceptual advancement to technical hibernation to accelerated development due to the possible end of Moore&#x00027;s law.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0002.tif"/>
</fig>
</sec>
<sec id="s3">
<title>Brain-like computing models</title>
<p>There are three main aspects of brain-like computing: simulation of neurons, information encoding of neural systems, and learning algorithms of neural networks.</p>
<sec>
<title>Neuron model</title>
<p>Neurons are the basic structural and functional units of the human nervous system. The models most commonly used in SNN construction include the Hodgkin&#x02013;Huxley (HH) model (Burkitt, <xref ref-type="bibr" rid="B21">2006</xref>), the integrate-and-fire (IF) model (Abbott, <xref ref-type="bibr" rid="B3">1999</xref>; Burkitt, <xref ref-type="bibr" rid="B21">2006</xref>), the leaky integrate-and-fire (LIF) model (Gerstner and Kistler, <xref ref-type="bibr" rid="B49">2002</xref>), the Izhikevich model (Izhikevich, <xref ref-type="bibr" rid="B68">2003</xref>; Valadez-God&#x000ED;nez et al., <xref ref-type="bibr" rid="B124">2020</xref>), and the AdEx IF model (Brette and Gerstner, <xref ref-type="bibr" rid="B20">2005</xref>).</p>
<list list-type="simple">
<list-item><p>1) HH model</p></list-item>
</list>
<p>The HH model is the closest to biological reality in its description of neuronal features and is widely used in computational neuroscience. It can simulate many neuronal phenomena, such as activation, inactivation, action potentials, and ion channels. The HH model describes neuronal electrical activity in terms of ionic activity. The cell membrane contains sodium, potassium, and leak channels, and each ion channel has different gating proteins that restrict the passage of ions, so the membrane&#x00027;s permeability differs for each ion species. This is what gives neurons their rich electrical activity. At the mathematical level, the combined effect of the gating proteins is equivalent to an ion channel conductance. The conductance of an ion channel, as a dependent variable, varies with the channel&#x00027;s activation and inactivation variables. The current through an ion channel is determined by the channel conductance, the channel&#x00027;s reversal potential, and the membrane potential, and the total current consists of the leak, sodium, and potassium currents plus the current due to membrane potential changes. The HH model therefore also equates the cell membrane to an equivalent circuit.</p>
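<p>As a concrete illustration, the following minimal sketch integrates the HH equations with the classic textbook squid-axon parameter values and gating rate functions (these numbers are standard defaults, not taken from this article) using forward Euler:</p>

```python
import math

# Classic Hodgkin-Huxley parameters (squid giant axon; mV, ms, uA/cm^2)
C_M = 1.0                                # membrane capacitance (uF/cm^2)
G_NA, G_K, G_L = 120.0, 36.0, 0.3        # maximal conductances (mS/cm^2)
E_NA, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials (mV)

# Voltage-dependent gating rate functions (textbook forms)
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations; returns the voltage trace."""
    v, m, h, n = -65.0, 0.0529, 0.596, 0.3177   # resting-state initial values
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)     # sodium current
        i_k = G_K * n**4 * (v - E_K)            # potassium current
        i_l = G_L * (v - E_L)                   # leak current
        v += dt / C_M * (i_ext - i_na - i_k - i_l)
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

trace = simulate_hh()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

<p>With a constant 10 &#x003BC;A/cm&#x000B2; step current the model fires repetitively, with action potentials overshooting 0 mV; with zero input the membrane stays near its resting potential of about &#x02212;65 mV.</p>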
<list list-type="simple">
<list-item><p>2) IF and LIF models</p></list-item>
</list>
<p>In 1907, the integrate-and-fire neuron model was proposed by Lapicque (<xref ref-type="bibr" rid="B76">1907</xref>). Depending on how the neuronal membrane potential varies with time, it can be divided into the IF model and the LIF model. The IF model describes how the neuronal membrane potential evolves with the input current, as shown in Equation 1:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mi>I</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><italic>C</italic><sub><italic>m</italic></sub> represents the neuronal membrane capacitance, which determines the rate of change of the membrane potential, and <italic>I</italic> represents the neuronal input current. The model is called the leak-free IF model because the membrane potential depends only on the input current: when the input current is zero, the membrane potential remains unchanged. Its discrete form is shown in Equation 2:</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>V</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>V</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x00394;</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mi>I</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x00394;<italic>t</italic> is the step length of discrete sampling.</p>
<p>In contrast, the LIF model adds the simulation of neuronal voltage leakage. When there is no current input for a certain period of time, the membrane voltage gradually leaks back to the resting potential, as shown in Equation 3 (which extends Equation 1):</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>C</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mi>V</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>I</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><italic>g</italic><sub><italic>leak</italic></sub> is the leak conductance of the neuron, and <italic>E</italic><sub><italic>rest</italic></sub> is its resting potential. Neuroscience studies have shown that the binding of neurotransmitters to receptors in the postsynaptic membrane primarily affects the conductance of the postsynaptic neuron, thereby altering its membrane potential. It is therefore more biologically plausible to expand the input current <italic>I</italic> in Equation 1 into excitatory and inhibitory currents described by conductances. However, both models reset the membrane potential directly to the resting potential after firing and thus cannot retain any trace of previous spikes.</p>
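<p>The LIF dynamics of Equation 3, plus the usual threshold-and-reset rule, can be sketched in a few lines. The threshold, reset, and conductance values below are illustrative assumptions, not values from this article:</p>

```python
def simulate_lif(i_ext, t_max=200.0, dt=0.1,
                 c_m=1.0, g_leak=0.1, e_rest=-65.0, v_thresh=-50.0):
    """Forward-Euler LIF: C_m dV/dt = g_leak*(E_rest - V) + I.
    On crossing v_thresh the neuron emits a spike and resets to E_rest."""
    v = e_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        t = step * dt
        v += dt / c_m * (g_leak * (e_rest - v) + i_ext(t))
        if v >= v_thresh:          # threshold crossing: spike, then reset
            spike_times.append(t)
            v = e_rest             # reset discards all history (no spike memory)
    return spike_times

# A constant suprathreshold current drives regular firing...
spikes = simulate_lif(lambda t: 2.0)
print(f"{len(spikes)} spikes in 200 ms")
# ...while zero input leaves the membrane silently at rest.
assert simulate_lif(lambda t: 0.0) == []
```

<p>The reset line makes the model&#x00027;s limitation explicit: after each spike the state returns to <monospace>e_rest</monospace>, so nothing of the previous spike survives.</p>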
<list list-type="simple">
<list-item><p>3) Izhikevich model</p></list-item>
</list>
<p>In 2003, Eugene M. Izhikevich proposed the Izhikevich model from the perspective of nonlinear dynamical systems (Izhikevich, <xref ref-type="bibr" rid="B69">2004</xref>). It can reproduce the firing behavior of a variety of biological neurons with a computational complexity close to that of the LIF model, as shown in Equations 4&#x02013;6:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>04</mml:mn><mml:msup><mml:mrow><mml:mi>V</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mn>5</mml:mn><mml:mi>V</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>140</mml:mn><mml:mo>-</mml:mo><mml:mi>U</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>I</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E5"><label>(5)</label><mml:math id="M5"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>U</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>b</mml:mi><mml:mi>V</mml:mi><mml:mo>-</mml:mo><mml:mi>U</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E6"><label>(6)</label><mml:math id="M6"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mtext>if&#x000A0;</mml:mtext><mml:mi>V</mml:mi><mml:mo>&#x02265;</mml:mo><mml:mn>30</mml:mn><mml:mtext>&#x000A0;</mml:mtext><mml:mi>m</mml:mi><mml:mi>V</mml:mi><mml:mo>,</mml:mo><mml:mtext>&#x000A0;then&#x000A0;</mml:mtext><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:mi>V</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>c</mml:mi></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mi>U</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>U</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>d</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In Equations 4&#x02013;6, <italic>U</italic> is an auxiliary (recovery) variable. By adjusting the parameters <italic>a</italic>, <italic>b</italic>, <italic>c</italic>, and <italic>d</italic>, the Izhikevich model can exhibit discharge behavior similar to the HH model. But unlike the HH model, in which each parameter has a clear physiological meaning (e.g., an ion channel conductance), these parameters no longer have corresponding physiological interpretations.</p>
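<p>Equations 4&#x02013;6 translate directly into code. The sketch below uses the commonly cited &#x0201C;regular spiking&#x0201D; parameter set (<monospace>a</monospace>&#x02009;=&#x02009;0.02, <monospace>b</monospace>&#x02009;=&#x02009;0.2, <monospace>c</monospace>&#x02009;=&#x02009;&#x02212;65, <monospace>d</monospace>&#x02009;=&#x02009;8) as an illustrative assumption:</p>

```python
def simulate_izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0,
                        i_ext=10.0, t_max=200.0, dt=0.25):
    """Euler integration of Equations 4-6 with a regular-spiking
    parameter set. Returns the spike times in ms."""
    v, u = -65.0, b * -65.0          # start at rest, U = b*V
    spike_times = []
    for step in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)  # Eq. 4
        u += dt * a * (b * v - u)                               # Eq. 5
        if v >= 30.0:                 # Eq. 6: spike, then reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

print(simulate_izhikevich()[:3])  # first few spike times (ms)
```

<p>Changing the four parameters switches the model among bursting, chattering, and other firing patterns without changing the equations themselves, which is why the model is popular despite its parameters lacking direct physiological meaning.</p>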
<list list-type="simple">
<list-item><p>4) AdEx IF model</p></list-item>
</list>
<p>The AdEx IF (adaptive exponential integrate-and-fire) model augments the exponential integrate-and-fire model with an adaptation variable. The adaptation slows the response of the membrane voltage, which results in a gradual decrease in the neuron&#x00027;s spike frequency under constant stimulation. We can think of this as the neuron gradually getting &#x0201C;tired&#x0201D; after emitting spikes. This spike-frequency adaptation is an essential feature that brings the AdEx IF model closer to the HH model in its firing behavior.</p>
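<p>The adaptation effect can be seen in a short simulation. This sketch uses the standard AdEx equations with the parameter values from Brette and Gerstner&#x00027;s original paper as illustrative assumptions; the point is that interspike intervals lengthen under a constant input:</p>

```python
import math

def simulate_adex(i_ext=800.0, t_max=500.0, dt=0.05):
    """AdEx sketch (units: pF, nS, mV, ms, pA). Returns spike times."""
    c_m, g_l, e_l = 281.0, 30.0, -70.6     # capacitance, leak, rest
    v_t, delta_t = -50.4, 2.0              # threshold, slope factor
    tau_w, a, b = 144.0, 4.0, 80.5         # adaptation time constant and gains
    v, w = e_l, 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        # exponential term, capped to avoid float overflow between steps
        exp_term = g_l * delta_t * math.exp(min((v - v_t) / delta_t, 30.0))
        v += dt / c_m * (-g_l * (v - e_l) + exp_term + i_ext - w)
        w += dt / tau_w * (a * (v - e_l) - w)
        if v >= 0.0:                        # spike detected: reset and adapt
            spikes.append(step * dt)
            v = e_l
            w += b                          # each spike strengthens adaptation
    return spikes

spikes = simulate_adex()
isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]
print(f"first ISI {isis[0]:.1f} ms, last ISI {isis[-1]:.1f} ms")
```

<p>Because each spike increments the adaptation current <monospace>w</monospace>, the later interspike intervals are longer than the first: the neuron &#x0201C;tires.&#x0201D;</p>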
<p>The comparison of the above five neuronal models is summarized in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Comparison of neuronal models.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Neuron models</bold></th>
<th valign="top" align="left"><bold>Circuit forms</bold></th>
<th valign="top" align="left"><bold>Advantages</bold></th>
<th valign="top" align="left"><bold>Defects</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">HH</td>
<td valign="top" align="left">Capacitor resistor circuit</td>
<td valign="top" align="left">Close to biological neurons, high accuracy</td>
<td valign="top" align="left">Complex expression, complicated operation</td>
</tr>
<tr>
<td valign="top" align="left">IF</td>
<td valign="top" align="left">Capacitance</td>
<td valign="top" align="left">Simple operation</td>
<td valign="top" align="left">Overly simple; no leakage, so the membrane potential never decays</td>
</tr>
<tr>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">Capacitor resistor circuit</td>
<td valign="top" align="left">Simulate resting state, simple operation</td>
<td valign="top" align="left">The model is simple and ignores many neurodynamic properties</td>
</tr>
<tr>
<td valign="top" align="left">Izhikevich</td>
<td valign="top" align="left"><bold>\</bold></td>
<td valign="top" align="left">Simulate multiple discharge modes</td>
<td valign="top" align="left">Parameters lack clear physiological meaning</td>
</tr>
<tr>
<td valign="top" align="left">AdEx IF</td>
<td valign="top" align="left"><bold>\</bold></td>
<td valign="top" align="left">Simulate multiple discharge modes</td>
<td valign="top" align="left">Reduced pulse firing frequency under constant voltage stimulation</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Neural system information encoding</title>
<p>Neural information encoding consists of two processes: feature extraction and spike sequence generation. For feature extraction, there is no mature theory or algorithm. For spike sequence generation, researchers commonly use two approaches: rate coding (Butts et al., <xref ref-type="bibr" rid="B22">2007</xref>; Panzeri et al., <xref ref-type="bibr" rid="B98">2010</xref>) and temporal coding. Rate coding expresses all the information of a spike sequence through its spike frequency, which cannot effectively describe fast time-varying perceptual information. Unlike average-rate coding, temporal coding takes into account that precisely timed spikes carry valid information, so it can describe neuronal activity more accurately. Precise spike timing plays an important role in the processing of visual, auditory, and other perceptual information.</p>
<list list-type="simple">
<list-item><p>1) Rate coding</p></list-item>
</list>
<p>Rate coding primarily uses a stochastic process to generate spike sequences. The response function of a neuron under Poisson coding consists of a series of spike (impulse) functions, as shown in Equation 7:</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M7"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>&#x003C1;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><italic>k</italic> is the number of spikes in a given spike sequence, <italic>t</italic> is the time variable, and <italic>t</italic><sub><italic>i</italic></sub> denotes the time at which the <italic>i</italic>-th spike occurs. The unit spike signal is defined as shown in Equation 8:</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M8"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>&#x003B4;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>w</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The integral satisfies <inline-formula><mml:math id="M9"><mml:msubsup><mml:mrow><mml:mo>&#x0222B;</mml:mo></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow></mml:msubsup><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula>. The time of a neuronal action potential response corresponds to a spike release time in the spike sequence. By the spike function property, the number of spikes within <italic>t</italic><sub>1</sub> to <italic>t</italic><sub>2</sub> can be calculated as <inline-formula><mml:math id="M10"><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo>&#x0222B;</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:msubsup><mml:mi>&#x003C1;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:math></inline-formula>. Thus, the instantaneous discharge frequency can be defined as the expectation of the neuronal response function. Following statistical probability theory, the mean value of the neuronal response function over a short time interval is used as an estimate of the discharge frequency (Koutrouvelis and Canavos, <xref ref-type="bibr" rid="B75">1999</xref>; Adam, <xref ref-type="bibr" rid="B5">2014</xref>; Safiullina, <xref ref-type="bibr" rid="B106">2016</xref>; Shanker, <xref ref-type="bibr" rid="B114">2017</xref>; Allo and Otok, <xref ref-type="bibr" rid="B8">2019</xref>), as shown in Equation 9:</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M11"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><italic>r</italic><sub><italic>M</italic></sub>(<italic>t</italic>) is the average response over the <italic>M</italic> recorded spike trains, and &#x003C1;<sub><italic>j</italic></sub>(<italic>t</italic>) is the spike response of the <italic>j</italic>-th neuron. Neither <italic>r</italic><sub><italic>M</italic></sub>(<italic>t</italic>) nor &#x003C1;<sub><italic>j</italic></sub>(<italic>t</italic>) is a continuous function; a smooth function can be obtained only in the limit of an infinite time window. The encoding rules are crucial for the mapping between values and spikes.</p>
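<p>A common way to realize rate coding in practice is to draw one Bernoulli trial per time bin with probability <italic>r</italic>&#x000B7;&#x00394;<italic>t</italic>, which approximates a Poisson process when <italic>r</italic>&#x000B7;&#x00394;<italic>t</italic> is small. The rate, window, and bin size below are illustrative assumptions:</p>

```python
import random

def poisson_spike_train(rate_hz, t_max_s, dt_s=0.001, seed=0):
    """Generate a spike train by drawing a Bernoulli trial per time bin
    with probability rate*dt (valid approximation when rate*dt << 1)."""
    rng = random.Random(seed)
    p = rate_hz * dt_s
    return [i * dt_s for i in range(int(t_max_s / dt_s)) if rng.random() < p]

def estimate_rate(spike_times, t_max_s):
    """Rate decoding: spike count divided by window length."""
    return len(spike_times) / t_max_s

spikes = poisson_spike_train(rate_hz=50.0, t_max_s=10.0)
print(f"estimated rate: {estimate_rate(spikes, 10.0):.1f} Hz")
```

<p>Over a long window the decoded rate converges to the target, but any information carried by the precise spike times within the window is discarded, which is exactly the limitation of rate coding noted above.</p>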
<list list-type="simple">
<list-item><p>2) Temporal coding</p></list-item>
</list>
<p>Temporal coding generally uses the time-to-first-spike mechanism, in which the stimulus intensity determines the moment a spike is issued, as shown in Equations 10 and 11:</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M12"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>T</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>T</mml:mi><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mo class="qopname">max</mml:mo></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mi>I</mml:mi><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E11"><label>(11)</label><mml:math id="M13"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>f</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo></mml:mrow></mml:mtd><mml:mtd columnalign='left'><mml:mrow><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>w</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p><italic>I</italic> represents the actual intensity of an image pixel in the pattern-recognition setting, and <italic>I</italic><sub>max</sub> represents the maximum pixel intensity. <inline-formula><mml:math id="M14"><mml:mfrac><mml:mrow><mml:mi>T</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mo class="qopname">max</mml:mo></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mi>I</mml:mi></mml:math></inline-formula> maps the pixel intensity into the time window <italic>T</italic> so that the intensity value can be converted into a spike time. <italic>T</italic><sub><italic>s</italic></sub> is the exact moment of the emitted spike; in the time-to-first-spike mechanism, a spike sequence contains only one spike.</p>
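Equations 10 and 11 can be sketched as follows; the pixel values, window length, and the convention that a 1 marks the spike time (the opposite polarity of Equation 11's literal 0/1) are illustrative assumptions:

```python
def first_spike_time(I, I_max, T):
    """Time-to-first-spike encoding (Equation 10): a strong stimulus
    (large pixel intensity I) fires early, a weak one fires late.
    Returns the single spike time T_s within the window [0, T]."""
    return T - (T / I_max) * I

def spike_train(I, I_max, T, dt):
    """Discretized spike train over the window: a 1 at the time step
    closest to T_s, silence everywhere else (one spike per sequence)."""
    T_s = first_spike_time(I, I_max, T)
    steps = int(T / dt)
    return [1 if abs(k * dt - T_s) < dt / 2 else 0 for k in range(steps + 1)]

# A maximal pixel fires immediately; a zero-intensity pixel fires at T.
train = spike_train(128, 256, T=10, dt=1)   # mid intensity -> spike at t = 5
```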
<list list-type="simple">
<list-item><p>3) Population coding</p></list-item>
</list>
<p>Population coding (Leutgeb et al., <xref ref-type="bibr" rid="B77">2005</xref>; Samonds et al., <xref ref-type="bibr" rid="B107">2006</xref>) is a method of representing a stimulus using the joint activity of multiple neurons. Gaussian population coding is the most widely used population coding model. In the actual encoding of an SNN, the pixel intensity is a real value encoded by a set of overlapping Gaussian receptive-field neurons. The larger the pixel intensity, the shorter the encoding time: receptive-field neurons whose centers lie close to the value respond strongly, fire early, and thus form the spike sequence. Let <italic>k</italic> Gaussian receptive-field neurons be used for encoding; the centers and widths of the <italic>k</italic> Gaussian functions are then given by Equations 12 and 13:</p>
<disp-formula id="E12"><label>(12)</label><mml:math id="M15"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo class="qopname">min</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x0002B;</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mfrac><mml:mrow><mml:mo class="qopname">max</mml:mo><mml:mo>-</mml:mo><mml:mo class="qopname">min</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>-</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mo>&#x022C5;</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mn>5</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E13"><label>(13)</label><mml:math id="M16"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>&#x003C3;</mml:mi><mml:mo>=</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mo>&#x022C5;</mml:mo><mml:mfrac><mml:mrow><mml:mo class="qopname">max</mml:mo><mml:mo>-</mml:mo><mml:mo class="qopname">min</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>-</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mn>5</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This type of encoding is used for continuous variables. For encoding sound frequencies or joint positions, for example, population coding achieves higher accuracy and realism than the two coding methods above. Because of these characteristics, it can significantly reduce the number of neurons required for a given accuracy. To further improve the effectiveness of information encoding, researchers are also introducing different mechanisms into the encoding process (Dennis et al., <xref ref-type="bibr" rid="B36">2013</xref>; Yu et al., <xref ref-type="bibr" rid="B137">2013</xref>).</p>
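A minimal sketch of Gaussian population coding in the spirit of Equations 12 and 13, assuming the index i runs from 0 to k-1, a firing-time window of 10 units, and &#x003B2; = 1.5 (all illustrative choices):

```python
import numpy as np

def gaussian_population_encode(x, k, x_min, x_max, beta=1.5):
    """Encode a real value x with k overlapping Gaussian receptive
    fields: neuron i has center c_i (Equation 12) and all neurons share
    the width sigma (Equation 13). A strong response maps to an early
    spike time, a weak response to a late one."""
    i = np.arange(k)
    c = x_min + (x_max - x_min) / (k - 2) * i          # centers c_i
    sigma = beta * (x_max - x_min) / (k - 2)           # shared width
    activation = np.exp(-((x - c) ** 2) / (2 * sigma ** 2))
    T = 10.0                                           # assumed time window
    return (1.0 - activation) * T                      # strong response -> early spike

times = gaussian_population_encode(0.3, k=5, x_min=0.0, x_max=1.0)
```

The neuron whose center lies nearest the encoded value fires first, which is exactly the "near the front" behavior described above.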
</sec>
<sec>
<title>Spiking neural networks and learning algorithms</title>
<p>Over the past decades, researchers have drawn inspiration from biological experimental phenomena and findings to explore the theory of synaptic plasticity. Bi and Poo proposed the spike-timing-dependent plasticity (STDP) mechanism, which adjusts the strength of neuronal connections according to the relative order of pre- and postsynaptic firing, and it has since been extended to different spike learning mechanisms (Bi and Poo, <xref ref-type="bibr" rid="B15">1999</xref>; Gjorgjieva et al., <xref ref-type="bibr" rid="B52">2011</xref>).</p>
<p>To solve the supervised learning problem of SNNs, researchers have combined the STDP mechanism with other weight adjustment methods, mainly the gradient descent and Widrow-Hoff rules. Based on gradient descent rules (Shi et al., <xref ref-type="bibr" rid="B115">1996</xref>), G&#x000FC;tig et al. put forward the Tempotron learning algorithm (G&#x000FC;tig and Sompolinsky, <xref ref-type="bibr" rid="B59">2006</xref>), which updates the synaptic weights according to the combined effect of the pre- and postsynaptic spike time difference and the error signal. Ponulak et al. proposed the ReSuMe learning method (Florian, <xref ref-type="bibr" rid="B44">2012</xref>), which avoids the gradient solving problem of gradient descent algorithms. The SPAN algorithm was proposed in ref. (Mohemmed et al., <xref ref-type="bibr" rid="B93">2013</xref>). It is similar to ReSuMe, except that it uses a spike convolution transform to convert spikes into analog values before performing operations, which is computationally intensive and can only learn offline. Based on gradient descent, an E-Learning rule is given by the Chronotron algorithm (Victor and Purpura, <xref ref-type="bibr" rid="B126">1997</xref>; van Rossum, <xref ref-type="bibr" rid="B125">2001</xref>). It adjusts the synapses by minimizing an error function defined by the difference between the target and the actual output spike sequences. Comparing the single-spike output of neurons (e.g., Tempotron) with the multi-spike output (e.g., ReSuMe) showed that multi-spike output can greatly improve classification accuracy and learning capacity (Gardner and Gruning, <xref ref-type="bibr" rid="B46">2014</xref>; G&#x000FC;tig, <xref ref-type="bibr" rid="B51">2014</xref>). Therefore, using neurons with multi-spike input&#x02013;output mapping as computational units is the basis for designing efficient SNNs with large learning capacity. Although multi-spike input&#x02013;output mapping can be implemented, it is so far only applicable to single-layer SNNs. In the literature (Ghosh-Dastidar and Adeli, <xref ref-type="bibr" rid="B50">2009</xref>; McKennoch et al., <xref ref-type="bibr" rid="B88">2009</xref>; Sporea and Gruning, <xref ref-type="bibr" rid="B119">2013</xref>; Xu et al., <xref ref-type="bibr" rid="B130">2013</xref>), researchers have tried to develop algorithms applicable to multilayer SNNs; however, these algorithms remain limited, and research on multilayer SNNs is still at an initial stage.</p>
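As a concrete illustration of the STDP mechanism that underlies these learning rules, a pair-based weight update might be sketched as follows; the amplitudes and time constant are illustrative, not taken from any cited work:

```python
import math

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre in milliseconds.
    A presynaptic spike that precedes the postsynaptic one (dt > 0)
    potentiates the synapse (LTP); the reverse order (dt < 0)
    depresses it (LTD). The effect decays exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)      # LTP branch
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)     # LTD branch
    return 0.0
```

The sign of the update depends only on the firing order, and closely timed spike pairs change the weight more than widely separated ones.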
<p>Since training algorithms for SNNs are less mature, some researchers have proposed converting traditional ANNs into SNNs. A deep ANN is first trained with a comparatively mature ANN training algorithm and then transformed into an SNN by firing-rate encoding (Diehl et al., <xref ref-type="bibr" rid="B37">2015</xref>), thus avoiding the difficulty of training SNNs directly. Based on this conversion mechanism, HRL Labs researchers (Cao et al., <xref ref-type="bibr" rid="B23">2014</xref>) converted a Convolutional Neural Network (CNN) (Liu X. et al., <xref ref-type="bibr" rid="B80">2021</xref>) to a Spiking CNN with recognition accuracy close to that of the CNN on the commonly used object recognition test sets Neovision2 and CIFAR-10. Another SNN architecture, the liquid state machine (LSM) (Maass et al., <xref ref-type="bibr" rid="B82">2002</xref>), can also avoid direct training of SNNs: as long as the SNN is large enough, it can theoretically handle arbitrarily complex input classification tasks. Since LSMs are recurrent neural networks, they have memory and can effectively handle the analysis of temporal information. New Zealand researcher Nikola Kasabov proposed the NeuCube system (Kasabov et al., <xref ref-type="bibr" rid="B71">2016</xref>) architecture based on the basic idea of the LSM for temporal and spatial information processing. In the training phase, NeuCube uses STDP, a quantum-inspired genetic algorithm, and other methods to train the SNN. In the operation phase, the parameters of the SNN and the output-layer classification algorithm keep changing dynamically, which gives the NeuCube system a strong adaptive capability.</p>
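The core idea of rate-based ANN-to-SNN conversion can be sketched as follows: a rate-coded integrate-and-fire unit's firing rate approximates the corresponding analog activation. The weights, input rates, and reset-by-subtraction scheme below are illustrative assumptions, not the cited authors' exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

def if_layer_rate(x, w, T=1000):
    """Drive an integrate-and-fire layer for T steps with Bernoulli
    rate-coded inputs. The output spike count divided by T approximates
    the ReLU activation w @ x of the original ANN layer."""
    v = np.zeros(w.shape[0])            # membrane potentials
    spikes = np.zeros(w.shape[0])
    for _ in range(T):
        inp = (rng.random(len(x)) < x).astype(float)   # rate-coded spikes
        v += w @ inp                    # integrate weighted input
        fired = v >= 1.0
        spikes += fired
        v[fired] -= 1.0                 # reset by subtraction
    return spikes / T

x = np.array([0.5, 0.2])                # analog activations in [0, 1]
w = np.array([[0.8, 0.4]])              # one output unit, weights < 1
rate = if_layer_rate(x, w)
target = float(max(0.0, w[0] @ x))      # 0.48, the value the rate approaches
```

Longer simulation windows T tighten the approximation, which is why converted SNNs trade latency for accuracy.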
</sec>
</sec>
<sec id="s4">
<title>Brain-like computing chips</title>
<sec>
<title>Darwin chip</title>
<p><xref ref-type="fig" rid="F3">Figure 3</xref> shows the overall microarchitecture of the Darwin Neural Processing Unit (NPU) (Ma et al., <xref ref-type="bibr" rid="B81">2017</xref>). Input and output spikes are encoded in the Address-event representation (AER) format: each AER packet contains the ID of the neuron that generated the spike and the timestamp at which it was generated, so one packet represents one spike. The NPU, driven by the input AER packets, works in an event-triggered manner. The spike router translates spikes into weight and latency information by accessing on-chip storage and SDRAM (Stankovic and Milenkovic, <xref ref-type="bibr" rid="B120">2015</xref>; Goossens et al., <xref ref-type="bibr" rid="B54">2016</xref>; Ecco and Ernst, <xref ref-type="bibr" rid="B42">2017</xref>; Li et al., <xref ref-type="bibr" rid="B78">2017</xref>; Garrido and Pirsch, <xref ref-type="bibr" rid="B47">2020</xref>; Benchehida et al., <xref ref-type="bibr" rid="B13">2022</xref>).</p>
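A minimal sketch of an AER event, assuming only the two fields named in the text (the field and class names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AERPacket:
    """Minimal address-event representation: a spike is fully described
    by the ID of the neuron that fired and the time step at which it
    fired, so many neurons can share one bus as a packet stream."""
    neuron_id: int
    timestamp: int

# Spikes arrive as a stream of packets ordered by time, not by wiring.
events = sorted([AERPacket(7, 3), AERPacket(2, 1)],
                key=lambda p: p.timestamp)
```

Because only active neurons produce packets, the bus bandwidth scales with spike activity rather than with the number of neurons, which is what makes AER attractive for sparse spiking traffic.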
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Overall microarchitecture of the Darwin Neural Processing Unit (NPU) and the process of processing AER packages and outputting them.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0003.tif"/>
</fig>
<p>The runtime execution steps for AER packet processing are as follows:</p>
<sec>
<title>Time calibration</title>
<p>The NPU works on an event-driven basis. When the FIFO receives a spike, it sends an AER packet to the NPU, which checks the packet&#x00027;s timestamp. If the timestamp matches the current time, the packet enters the spike-routing process; otherwise, it goes to the neuron state update process.</p>
</sec>
<sec>
<title>Input spike routing</title>
<p>Each input spike&#x00027;s AER packet consists of the timestamp and the source (presynaptic) neuron ID. The source ID is used to look up the target (postsynaptic) neuron IDs and the synaptic properties, i.e., the weights and delays, stored in the off-chip DRAM.</p>
</sec>
<sec>
<title>Synaptic delay management</title>
<p>Each synapse has an independently configurable delay parameter, which defines the delay from the generation of a presynaptic spike to its reception by the postsynaptic neuron. Each entry of the weight-and-delay queue holds an intermediate weight sum and is delivered to the neuron after the configured delay.</p>
</sec>
<sec>
<title>Neuron state update</title>
<p>Each neuron updates its state in turn. First, the neuron&#x00027;s current membrane state is loaded from the local state memory. Then, the sum of synaptic weights for the current time step is added from the weight-and-delay queue. If an output spike is generated, it is sent to the spike router as an AER packet.</p>
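Assuming the simple leaky integrate-and-fire (LIF) model that Darwin supports, the update step above might be sketched as follows (the leak factor, threshold, and reset value are illustrative, not the chip's actual parameters):

```python
def lif_update(v, weighted_sum, leak=0.9, v_th=1.0, v_reset=0.0):
    """One event-driven update: take the stored membrane potential v,
    apply the leak, add the accumulated synaptic weights for the
    current time step, and emit a spike if the threshold is crossed."""
    v = v * leak + weighted_sum     # leaky integration
    if v >= v_th:
        return v_reset, True        # spike: reset and emit an AER packet
    return v, False

# Three identical input steps: the potential climbs until it spikes.
v, fired = 0.0, False
for s in [0.4, 0.4, 0.4]:
    v, fired = lif_update(v, s)
```

On the chip, the returned state would be written back to the local state memory and the spike handed to the router, matching the load-accumulate-fire sequence described above.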
</sec>
<sec>
<title>Internal spike routing</title>
<p>It is similar to the process of input spike routing.</p>
<p>Darwin Chip&#x00027;s NPU is an SNN-based neuromorphic co-processor, while it still is a single-chip system, for now, the standard communication interface defined by the AER format allows expanding to multi-chip distributed systems (Nejad, <xref ref-type="bibr" rid="B95">2020</xref>; Cui et al., <xref ref-type="bibr" rid="B32">2021</xref>; Hao et al., <xref ref-type="bibr" rid="B61">2021</xref>; Ding et al., <xref ref-type="bibr" rid="B38">2022</xref>) with AER bus connections in the future. NPU, as a processing element in a network-on-a-chip (NoC) architecture200, can use the AER format for input and output peaking to scale the SNN&#x00027;s size of the chip to potentially millions of neurons, not just thousands of neurons.</p>
</sec>
</sec>
<sec>
<title>Tianjic chip</title>
<p>The Tianjic team is committed to creating a brain-like computing chip that combines the advantages of traditional computing and neuromorphic computation. To this end, the researchers designed the unified functional core (FCore) of the Tianjic chip (Pei et al., <xref ref-type="bibr" rid="B100">2019</xref>), which consists of four main components: axons, dendrites, soma, and router. <xref ref-type="fig" rid="F4">Figure 4</xref> shows the architecture of the Tianjic chip.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>The unified functional core (Fcore) of the Tianjic chip consists of four main components: axons, dendrites, soma, and router.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0004.tif"/>
</fig>
<sec>
<title>Axon block</title>
<p>The Tianjic team added a small buffer memory to record historical spikes in SNN mode (Yang Z. et al., <xref ref-type="bibr" rid="B134">2020</xref>; Agebure et al., <xref ref-type="bibr" rid="B6">2021</xref>; Liu F. et al., <xref ref-type="bibr" rid="B79">2021</xref>; Syed et al., <xref ref-type="bibr" rid="B121">2021</xref>; Das et al., <xref ref-type="bibr" rid="B34">2022</xref>; Mao et al., <xref ref-type="bibr" rid="B84">2022</xref>). The buffer supports reconfigurable spike-collection durations and bit access <italic>via</italic> shift operations.</p>
</sec>
<sec>
<title>Dendritic blocks</title>
<p>Membrane-potential integration in SNN mode and multiply-and-accumulate (MAC) operations in ANN mode share the same arithmetic unit, unifying the level of abstraction of SNNs and ANNs during processing. In ANN mode, the MAC units multiply and accumulate; in SNN mode, a bypass mechanism skips the multiplication to reduce energy when the time window has length one.</p>
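The shared dendritic arithmetic can be illustrated with a small sketch (not the chip's implementation; the values and mode names are illustrative):

```python
def dendrite(inputs, weights, mode):
    """Shared dendritic arithmetic: in ANN mode the unit performs a
    full multiply-and-accumulate; in SNN mode the inputs are binary
    spikes, so the multiplier is bypassed and weights are simply
    accumulated wherever a spike is present."""
    if mode == "ANN":
        return sum(x * w for x, w in zip(inputs, weights))
    # SNN mode: inputs are 0/1 spikes, so skip the multiplication
    return sum(w for x, w in zip(inputs, weights) if x)

ann_out = dendrite([0.5, 0.0, 1.0], [0.2, 0.7, 0.3], "ANN")
snn_out = dendrite([1, 0, 1], [0.2, 0.7, 0.3], "SNN")
```

Because spikes are binary, the SNN path needs only additions, which is where the energy saving comes from.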
</sec>
<sec>
<title>Soma</title>
<p>In SNN mode, the soma is reconfigurable to support membrane-potential storage, threshold comparison, deterministic or probabilistic firing, and potential reset. In ANN mode, fixed or adaptive leakage of the potential value reproduces the leak function of the membrane potential.</p>
<p>To transmit information between neurons, a router receives and sends messages; it is responsible for the transmission and conversion of information between somas and axons. The design of the Tianjic chip is shown in <xref ref-type="fig" rid="F5">Figure 5</xref>.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Design of the Tianjic chip and its specific processing flow in ANN and SNN mode.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0005.tif"/>
</fig>
<p>In order to support parallel processing of large networks or multiple networks, the chip is equipped with a multi-core architecture (Chai et al., <xref ref-type="bibr" rid="B25">2007</xref>; Chaparro et al., <xref ref-type="bibr" rid="B26">2007</xref>; Yu et al., <xref ref-type="bibr" rid="B138">2014</xref>; Grassia et al., <xref ref-type="bibr" rid="B55">2018</xref>; Kiyoyama et al., <xref ref-type="bibr" rid="B73">2021</xref>; Kimura et al., <xref ref-type="bibr" rid="B72">2022</xref>; Zhenghao et al., <xref ref-type="bibr" rid="B140">2022</xref>) that allows seamless simultaneous communication. The FCores of the chip, shown in <xref ref-type="fig" rid="F6">Figure 6</xref>, are arranged in a two-dimensional (2D) grid. The reconfigurable routing tables of the FCore routers support arbitrary connection topologies.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Arrangement format of FCores on the Tianjic chip. <bold>(A)</bold> The reconfigurable routing tables of the FCore routers support arbitrary connection topologies. <bold>(B)</bold> The arrangement of FCores on the chip.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0006.tif"/>
</fig>
</sec>
</sec>
<sec>
<title>TrueNorth</title>
<p>IBM built a brain-like computing chip starting from the composition and principles of neurons, mimicking the structure of the brain and performing neural simulation with the help of the spike-signal conduction process (Birkhoff, <xref ref-type="bibr" rid="B16">1940</xref>). Starting from neuroscience, neuromorphic synaptic cores are used as the basic building blocks of the entire network (Service, <xref ref-type="bibr" rid="B113">2014</xref>; Wang and Hua, <xref ref-type="bibr" rid="B129">2016</xref>). The designers of TrueNorth treat the neuron as the main arithmetic unit: a neuron receives and integrates &#x0201C;1&#x0201D; or &#x0201C;0&#x0201D; pulse signals, issues instructions based on them, and outputs the instructions to other neurons through the synapses at the connections between neurons (Abramsky and Tzevelekos, <xref ref-type="bibr" rid="B4">2010</xref>; Russo, <xref ref-type="bibr" rid="B105">2010</xref>). This is shown in <xref ref-type="fig" rid="F7">Figure 7</xref>.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Neuromorphic synaptic apparatus spiking neural network core block.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0007.tif"/>
</fig>
<p>The data transmission is implemented in two stages. First, data are passed between core blocks along the <italic>x</italic>-direction and then along the <italic>y</italic>-direction until they reach the target core block. Then the information is transmitted within the core block: it first passes through the presynaptic axons (horizontal lines) and the cross-aligned synapses (binary junctions), and finally reaches the inputs of the postsynaptic neurons (vertical lines).</p>
<p>When a neuron on a core block is excited, it first searches local memory for the axon delay value and the destination address, and then encodes this information into a data packet. If the destination neuron and the source neuron are in the same core block, the local channel in the router is used to transmit the data; otherwise, a channel in the appropriate direction is used. To prevent the limitation caused by an excessive number of core blocks, a combined decentralized structure is used at the four edges of the network. When leaving a core block, a spike leaving the network is marked with its row (east&#x02013;west direction) and column (north&#x02013;south direction). When entering a core block, spikes sharing the link input are propagated to the corresponding row or column using this marking information, as shown in <xref ref-type="fig" rid="F8">Figure 8</xref>. The global synchronization clock is 1 kHz, which ensures an exact one-to-one correspondence between hardware and software for the core blocks operating in parallel (Hermeline, <xref ref-type="bibr" rid="B63">2007</xref>).</p>
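The X-then-Y transmission between core blocks amounts to dimension-ordered routing, which can be sketched as follows (grid coordinates and single-hop steps are illustrative):

```python
def xy_route(src, dst):
    """Dimension-ordered (X-then-Y) routing between core blocks: the
    packet first travels along the x-direction until the column matches
    the destination, then along the y-direction to the target block.
    Returns the full hop sequence, including source and destination."""
    x, y = src
    path = [src]
    while x != dst[0]:                  # x-direction first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                  # then y-direction
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

hops = xy_route((0, 0), (2, 1))         # two hops east, then one north
```

X-then-Y ordering is a common deadlock-free choice for 2D mesh networks, since no packet ever turns from the y-direction back into the x-direction.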
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>The propagation of router data between core blocks.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0008.tif"/>
</fig>
</sec>
<sec>
<title>Neurogrid</title>
<p>The Neurogrid chip (Benjamin et al., <xref ref-type="bibr" rid="B14">2014</xref>) consists of axonal, synaptic, dendritic, and somatic parts. Neurogrid chips are available in four structures: fully dedicated (FD) (Boahen et al., <xref ref-type="bibr" rid="B18">1989</xref>; Sivilotti et al., <xref ref-type="bibr" rid="B118">1990</xref>), shared axons (SA) (Sivilotti, <xref ref-type="bibr" rid="B117">1991</xref>; Mahowald, <xref ref-type="bibr" rid="B83">1994</xref>; Boahen, <xref ref-type="bibr" rid="B17">2000</xref>), shared synapses (SS) (Hammerstrom, <xref ref-type="bibr" rid="B60">1990</xref>; Yasunaga et al., <xref ref-type="bibr" rid="B135">1990</xref>), and shared dendrites (SD) (Merolla and Boahen, <xref ref-type="bibr" rid="B90">2003</xref>; Choi et al., <xref ref-type="bibr" rid="B31">2005</xref>). The four elements that make up the chip (axon, synapse, dendrite, and soma) can be classified according to the architecture and the implementation. As shown in <xref ref-type="fig" rid="F9">Figure 9</xref>, in the analog implementation these elements are realized with wires, a switched current source, and a comparator (Mead, <xref ref-type="bibr" rid="B89">1989</xref>). A vertical wire (axon) injects charge (synapse) into a horizontal wire (dendrite), where the charge is integrated by the dendrite&#x00027;s capacitance. The resulting voltage is compared with a threshold by the comparator (soma); if the voltage exceeds the threshold, the comparator triggers an output spike, after which the capacitor is discharged (reset) and a new cycle starts.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>Analog silicon neurons implementation.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0009.tif"/>
</fig>
<p>In the simplest all-digital implementation, the switched current source is replaced by a bit cell. The axon and dendrite act as word and bit lines, and integration and comparison are performed digitally: in each cycle, an axon event triggers the reading of a binary 1 from the synapse, a counter (dendrite) increments, and the counter&#x00027;s output is compared with a digitally stored threshold (soma). If the threshold is exceeded, a spike is triggered and the counter is reset to start a new cycle. The process is shown in <xref ref-type="fig" rid="F10">Figure 10</xref>.</p>
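The counter-based digital loop can be sketched as follows (the threshold and input bits are illustrative):

```python
def digital_neuron(synapse_bits, threshold):
    """All-digital neuron loop: each triggering axon event reads one
    bit from the synapse; a stored 1 increments the counter (dendrite).
    The count is compared with a stored threshold (soma); reaching it
    emits a spike and resets the counter for the next cycle."""
    count, spikes = 0, 0
    for bit in synapse_bits:        # one bit per triggering axon event
        count += bit                # increment only on a stored 1
        if count >= threshold:
            spikes += 1
            count = 0               # reset, start a new cycle
    return spikes, count

# Six axon events, of which five read a 1; threshold of 3 -> one spike.
spikes, residue = digital_neuron([1, 1, 0, 1, 1, 1], threshold=3)
```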
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p>Schematic diagram of analog neuron with cycle counting structure.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0010.tif"/>
</fig>
<p>Spikes of a neuron are sent from its array through a transmitter, passed through a router to the parent and two children of its neural core, and delivered through a receiver to the target arrays. All these digital circuits are event-driven: their logic switches only when a spike event occurs, following Martin&#x00027;s asynchronous circuit design (Martin, <xref ref-type="bibr" rid="B85">1989</xref>; Martin and Nystrom, <xref ref-type="bibr" rid="B86">2006</xref>). The chip has a transmitter and a receiver. The transmitter sends multiple spikes from one row, and the receiver delivers multiple spikes to a row; the shared row address and the unique column addresses of these spikes are communicated sequentially. This design increases throughput during communication.</p>
</sec>
<sec>
<title>BrainScaleS-2</title>
<p>The BrainScaleS team released two versions of the BrainScaleS chip design in 2020; here we present the BrainScaleS-2 chip (Schemmel et al., <xref ref-type="bibr" rid="B108">2003</xref>; Gr&#x000FC;bl et al., <xref ref-type="bibr" rid="B56">2020</xref>). The architecture of BrainScaleS-2 depends on tight interaction between analog and digital circuit blocks. Reflecting its main intended function, the digital processor core is referred to as the plasticity processing unit (PPU). The analog core serves as the main neuromorphic component and includes synaptic and neuronal circuits (Aamir et al., <xref ref-type="bibr" rid="B1">2016</xref>, <xref ref-type="bibr" rid="B2">2018</xref>), PPU interfaces, analog parameter memory, and all event-related interface components.</p>
<p>There is a digital plasticity processor in the BSS-2 ASIC (Friedmann et al., <xref ref-type="bibr" rid="B45">2016</xref>). This microprocessor, specialized for highly parallel single-instruction multiple-data (SIMD) operation, adds a further layer of modeling capability. <xref ref-type="fig" rid="F11">Figure 11</xref> shows the architecture.</p>
<fig id="F11" position="float">
<label>Figure 11</label>
<caption><p>BBS chip structure.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0011.tif"/>
</fig>
<p>The PPU is an embedded microprocessor core with SIMD units. The units and the analog core are jointly optimized for computing plasticity rules (Friedmann et al., <xref ref-type="bibr" rid="B45">2016</xref>). In the current BSS-2 architecture, one analog core is shared by two PPUs, which allows the neuronal circuits to be placed most efficiently at the center of the analog core. <xref ref-type="fig" rid="F12">Figure 12</xref> shows the individual functional blocks of the analog network core.</p>
<fig id="F12" position="float">
<label>Figure 12</label>
<caption><p>Block diagram of the Analog Network Core (ANNCORE).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnbot-16-1041108-g0012.tif"/>
</fig>
<sec>
<title>Synaptic arrays</title>
<p>In order to keep the vertical and horizontal lines that run through the subarrays as short as possible, the synapses are divided into four blocks of equal size, which reduces their parasitic capacitance (Friedmann et al., <xref ref-type="bibr" rid="B45">2016</xref>; Aamir et al., <xref ref-type="bibr" rid="B2">2018</xref>). Each synaptic array resembles a static memory block, and each synapse has 16 memory cells. The two PPUs are connected to the static memory interfaces of the two adjacent synaptic arrays by a fully parallel connection with 8 &#x000D7; 256 data lines.</p>
</sec>
<sec>
<title>Neuronal compartment circuits</title>
<p>Four rows of neuronal compartment circuits are placed at the edges of the synaptic blocks. Each pair of dendritic input lines of a neuronal compartment connects to 256 synapses. The neuron compartments implement the AdEx neuron model.</p>
</sec>
<sec>
<title>Analog parameter memories</title>
<p>A row of analog parameter memory lies between each row of neurons. These capacitive memories store 24 analog values per neuron and another 48 global parameters. The parameters are automatically refreshed with values received from the storage block.</p>
</sec>
<sec>
<title>Digital neuron control</title>
<p>The digital neuron control block is shared by two rows of neurons; it synchronizes neural events to a 125-MHz digital system clock and serializes them onto the digital output bus.</p>
</sec>
<sec>
<title>Synaptic drives with short-term plasticity</title>
<p>Presynaptic events are fed into the array by synaptic drivers. They contain circuits that emulate the simplified Tsodyks&#x02013;Markram model (Tsodyks and Markram, <xref ref-type="bibr" rid="B123">1997</xref>; Schemmel et al., <xref ref-type="bibr" rid="B110">2007</xref>) of short-term plasticity. The synaptic drivers can handle single- or multi-valued input signals.</p>
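One common discretization of the simplified Tsodyks-Markram short-term plasticity model can be sketched as follows; the constants and update order are illustrative, and variants of the model exist:

```python
import math

def tsodyks_markram_step(u, x, dt_since_last, U=0.2, tau_f=600.0, tau_d=800.0):
    """One presynaptic spike in the Tsodyks-Markram model: u is the
    utilization (facilitation) variable and x the fraction of available
    resources (depression). Both relax toward their resting values
    between spikes; the released amount u*x scales synaptic efficacy."""
    # exponential recovery during the inter-spike interval
    u = U + (u - U) * math.exp(-dt_since_last / tau_f)
    x = 1.0 + (x - 1.0) * math.exp(-dt_since_last / tau_d)
    release = u * x          # effective weight scale for this spike
    u = u + U * (1.0 - u)    # facilitation jump on the spike
    x = x * (1.0 - u)        # resource depletion by the release
    return u, x, release

# Drive the synapse with a 50 Hz burst (20 ms inter-spike intervals).
u, x, releases = 0.2, 1.0, []
for _ in range(5):
    u, x, r = tsodyks_markram_step(u, x, 20.0)
    releases.append(r)
```

Depending on the chosen constants, the same equations yield facilitating or depressing synapses, which is why a small driver circuit suffices to emulate both behaviors.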
</sec>
<sec>
<title>Random event generator</title>
<p>The random event generator feeds events directly into the synaptic array <italic>via</italic> the synaptic drivers, which greatly reduces the required external bandwidth when random models are used (Pfeil et al., <xref ref-type="bibr" rid="B101">2015</xref>; Jordan et al., <xref ref-type="bibr" rid="B70">2019</xref>).</p>
</sec>
<sec>
<title>Correlation analog to digital converters (ADCs)</title>
<p>The SIMD units of the PPUs are located at the top and bottom edges of the analog core. Column-parallel ADCs convert analog data from the synaptic array and selected analog signals from the neurons into the digital representation required by the PPU.</p>
<p><xref ref-type="table" rid="T2">Table 2</xref> summarizes the representative domestic and international brain-like computing projects, chips, and hardware platforms discussed above.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Brain-like chip summarization and comparison.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Project &#x00026; organization</bold></th>
<th valign="top" align="left"><bold>Manufacturing process</bold></th>
<th valign="top" align="center"><bold>Number of neurons</bold></th>
<th valign="top" align="center"><bold>Number of synapses</bold></th>
<th valign="top" align="left"><bold>Neuronal models</bold></th>
<th valign="top" align="left"><bold>Learning algorithms</bold></th>
<th valign="top" align="left"><bold>Advantages</bold></th>
<th valign="top" align="left"><bold>Defects</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Darwin, Zhejiang University</td>
<td valign="top" align="left">180 nm CMOS</td>
<td valign="top" align="center">2,048</td>
<td valign="top" align="center">4,194,304</td>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">\</td>
<td valign="top" align="left">Highly configurable</td>
<td valign="top" align="left">Single chip, small scale</td>
</tr>
<tr>
<td valign="top" align="left">Tianjic, Tsinghua University</td>
<td valign="top" align="left">28 nm CMOS</td>
<td valign="top" align="center">40,000</td>
<td valign="top" align="center">100,000,000</td>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">STDP</td>
<td valign="top" align="left">Heterogeneous fusion</td>
<td valign="top" align="left">\</td>
</tr>
<tr>
<td valign="top" align="left">TrueNorth</td>
<td valign="top" align="left">28 nm CMOS</td>
<td valign="top" align="center">1,000,000</td>
<td valign="top" align="center">256,000,000</td>
<td valign="top" align="left">LIF</td>
<td valign="top" align="left">\</td>
<td valign="top" align="left">Highly configurable</td>
<td valign="top" align="left">Off-chip learning only</td>
</tr>
<tr>
<td valign="top" align="left">Neurogrid</td>
<td valign="top" align="left">180 nm CMOS</td>
<td valign="top" align="center">1,048,576</td>
<td valign="top" align="center">Hundreds of millions</td>
<td valign="top" align="left">QIF</td>
<td valign="top" align="left">\</td>
<td valign="top" align="left">High throughput</td>
<td valign="top" align="left">No plasticity</td>
</tr>
<tr>
<td valign="top" align="left">BrainScaleS-2</td>
<td valign="top" align="left">65 nm CMOS</td>
<td valign="top" align="center">196,608</td>
<td valign="top" align="center">50,331,648</td>
<td valign="top" align="left">AdEx IF</td>
<td valign="top" align="left">STDP</td>
<td valign="top" align="left">Mixed plasticity rule</td>
<td valign="top" align="left">Does not demonstrate the ability to handle practical tasks</td>
</tr>
</tbody>
</table>
</table-wrap>
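<p>Most of the chips in Table 2 are built around the leaky integrate-and-fire (LIF) neuron model. As a minimal illustration of the model itself (the parameter values here are illustrative and do not correspond to any particular chip), the discrete-time LIF update can be sketched as:</p>

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron sketch.
# Parameters are illustrative, not taken from any chip in Table 2.
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_rest         # reset the membrane after firing
    return spikes

# A constant input drives the neuron to fire periodically.
print(simulate_lif([0.3] * 20))  # -> [3, 7, 11, 15, 19]
```

<p>The same state-update-and-threshold loop, implemented per neuron in analog or digital circuitry, is what the chips above parallelize across thousands to millions of units.</p>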
</sec>
</sec>
</sec>
<sec id="s5">
<title>Brain-like computing application</title>
<sec>
<title>Brain cognition principle</title>
<p>The main advantages of neuromorphic computing over traditional methods are energy efficiency, speed of execution, robustness to local failures, and learning ability. At present, neuroscientific knowledge of the human brain remains superficial, and the development of neuromorphic computing is not yet guided by a mature theory. Researchers hope to refine models and theories by using brain-like computing for partial simulations of brain function and structure (Casali et al., <xref ref-type="bibr" rid="B24">2019</xref>; Rizza et al., <xref ref-type="bibr" rid="B104">2021</xref>).</p>
<p>In 2018, Rosanna Migliore et al. (<xref ref-type="bibr" rid="B91">2018</xref>) used a unified data-driven modeling workflow to explore the physiological variability of channel density in hippocampal CA1 pyramidal cells and interneurons. In 2019, Alice Geminiani et al. (<xref ref-type="bibr" rid="B48">2019</xref>) optimized the extended generalized leaky integrate-and-fire (E-GLIF) neuron model. In 2020, Paolo Migliore et al. (<xref ref-type="bibr" rid="B91">2018</xref>) designed new recurrent spiking neural networks (RSNNs) based on voltage-dependent learning rules. Their model can generate theoretical predictions for experimental validation.</p>
<p>Brain-like computing can help neuroscience understand the human brain more deeply and parse its structure (Amunts et al., <xref ref-type="bibr" rid="B10">2016</xref>; Dobs et al., <xref ref-type="bibr" rid="B39">2022</xref>). Once the operating mechanisms of the human brain are sufficiently understood, we may be able to act directly on the brain to improve thinking ability and treat currently incurable brain diseases, and even push human intelligence to new heights.</p>
</sec>
<sec>
<title>Medical health</title>
<p>The application of brain-like computing in the medical field mainly relies on the development and application of the brain&#x02013;computer interface (Mudgal et al., <xref ref-type="bibr" rid="B94">2020</xref>; Huang D. et al., <xref ref-type="bibr" rid="B65">2022</xref>). It is reflected in four aspects: monitoring, improvement, replacement, and enhancement.</p>
<p>Monitoring means that the brain&#x02013;computer interface system performs real-time monitoring and measurement of the state of the human nervous system (Miko&#x00142;ajewska and Miko&#x00142;ajewski, <xref ref-type="bibr" rid="B92">2014</xref>; Shiotani et al., <xref ref-type="bibr" rid="B116">2016</xref>; Olaronke et al., <xref ref-type="bibr" rid="B97">2018</xref>; Sengupta et al., <xref ref-type="bibr" rid="B112">2020</xref>). It can help grade consciousness in patients in a deep coma and measure the state of neural pathways in patients with visual/auditory impairment. Improvement means providing recovery training for ADHD, stroke, epilepsy, and other conditions (Cheng et al., <xref ref-type="bibr" rid="B30">2020</xref>). After doctors detect abnormal neuronal discharges through brain&#x02013;computer interface technology, they can apply appropriate electrical stimulation to the brain to suppress seizures. &#x0201C;Replacement&#x0201D; is primarily for patients who have lost some function due to injury or disease. For example, people who have lost the ability to speak can express themselves through a brain&#x02013;computer interface (Ramakuri et al., <xref ref-type="bibr" rid="B103">2019</xref>; Czech, <xref ref-type="bibr" rid="B33">2021</xref>), and people with severe motor disabilities can communicate what they are thinking through a brain&#x02013;computer interface system. &#x0201C;Enhancement&#x0201D; refers to strengthening brain functions by implanting chips into the brain (Kotchetkov et al., <xref ref-type="bibr" rid="B74">2010</xref>), for example, to enhance memory or to let a person control mechanical devices directly.</p>
</sec>
<sec>
<title>Intelligence education</title>
<p>The education and development of children is an important issue of social concern. To date, however, research on children&#x00027;s development and psychological problems has relied only on dialog and observation. Brain-like computing research aims to directly observe the corresponding brain waves and decode brain activity.</p>
<p>In the &#x0201C;Brain Science and Brain-like Research&#x0201D; project guidelines, the state calls for using brain-like technology to study the mental health of children and adolescents, including the interaction between emotional problems and cognitive abilities and their brain mechanisms, and for developing screening tools and early warning systems for emotional problems in children and adolescents by combining machine learning (Dwyer et al., <xref ref-type="bibr" rid="B41">2018</xref>; Yang J. et al., <xref ref-type="bibr" rid="B132">2020</xref>; Du et al., <xref ref-type="bibr" rid="B40">2021</xref>; Paton and Tiffin, <xref ref-type="bibr" rid="B99">2022</xref>) and other means, while encouraging the integration of medicine and education. Eventually, we will develop psychological intervention and regulation tools for children and adolescents&#x00027; emotional problems and establish a platform for monitoring and intervening in children and adolescents&#x00027; psychological crises based on multi-level systems, such as schools and medical care.</p>
</sec>
<sec>
<title>Intelligent transportation</title>
<p>Today&#x00027;s self-driving cars carry many sensors, including radar, infrared, cameras, and GPS, yet the vehicle still cannot make the right decisions the way a human does. Humans need only vision and hearing among their senses to drive a vehicle safely. The human brain has powerful synchronous and asynchronous processing capabilities for reasonable scheduling, and human visual recognition remains more accurate than current camera-based recognition.</p>
<p>Inspired by the way neurons in the biological retina transmit information, Mahowald and Mead proposed in the early 1990s an asynchronous signal transmission method called AER (Tsodyks et al., <xref ref-type="bibr" rid="B122">1998</xref>; Service, <xref ref-type="bibr" rid="B113">2014</xref>). When an &#x0201C;event&#x0201D; occurs at a pixel in the array, the pixel&#x00027;s position is output together with the event. Based on this principle, the Dynamic Vision Sensor (DVS) (Amunts et al., <xref ref-type="bibr" rid="B10">2016</xref>) was developed at the University of Zurich, Switzerland, to detect changes in the brightness of pixels in an image. The low bandwidth of DVS gives it a natural advantage in the field of robot vision, and work has been done to use it in autonomous walking robots and autonomous vehicles. Dr. Shoushun Chen of Nanyang Technological University, Singapore, developed an asynchronous sensing chip with a temporal sensitivity of 25 nanoseconds (Schemmel et al., <xref ref-type="bibr" rid="B108">2003</xref>, <xref ref-type="bibr" rid="B109">2010</xref>; Scholze et al., <xref ref-type="bibr" rid="B111">2012</xref>). The brain-like cochlea (Scholze et al., <xref ref-type="bibr" rid="B111">2012</xref>) is a brain-like auditory sensor based on a similar principle that can be used for sound recognition and localization. The results of all these studies will accelerate the implementation of autonomous driving and help ensure its safety.</p>
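<p>The event-generation principle described above can be illustrated with a toy sketch (the threshold and frame representation here are illustrative, not the actual DVS circuit): only pixels whose brightness change exceeds a threshold emit an address-event, so static regions of the scene consume no bandwidth.</p>

```python
# Toy sketch of DVS-style address-event generation (illustrative
# threshold, not the real sensor circuit): compare two frames and
# emit an (x, y, polarity) event for each sufficiently changed pixel.
def frame_to_events(prev_frame, curr_frame, threshold=0.1):
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            diff = c - p
            if abs(diff) >= threshold:
                # polarity +1 for brightening, -1 for darkening
                events.append((x, y, 1 if diff > 0 else -1))
    return events

prev = [[0.5, 0.5], [0.5, 0.5]]
curr = [[0.5, 0.9], [0.2, 0.5]]
print(frame_to_events(prev, curr))  # -> [(1, 0, 1), (0, 1, -1)]
```

<p>Note that the real sensor operates asynchronously per pixel rather than on discrete frames; the frame comparison here is only a convenient way to show which pixels would emit events.</p>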
</sec>
<sec>
<title>Military applications</title>
<p>Brain-like chips have the technical potential for ultra-low power consumption, massively parallel computing, and real-time information processing. They have unique advantages in military application scenarios, especially under strong constraints on performance, speed, and power consumption. They can be used for ultra-low-latency dynamic visual recognition of military targets in the sky and for building cognitive supercomputers that rapidly process massive amounts of data (Czech, <xref ref-type="bibr" rid="B33">2021</xref>). In addition, brain-like computing can be used for intelligent gaming confrontation and decision-making on the future battlefield.</p>
<p>The brain-like chip&#x00027;s ultra-low power consumption, ultra-low-latency real-time high-speed dynamic visual recognition and tracking technology, and sensor information processing technology are key technologies at the strategic level of national defense science and technology. The ultra-low-latency real-time high-speed dynamic visual recognition technology in particular plays an extremely important role in the field of high-speed dynamic recognition. In 2014, the U.S. Air Force signed a contract with IBM to recognize high-altitude flying targets efficiently and at low power through brain-like computing. The U.S. Air Force Research Laboratory began developing a brain-like supercomputer using IBM&#x00027;s TrueNorth brain-like chip in June 2017. In the following year, the laboratory released the world&#x00027;s largest neuromorphic supercomputer, Blue Jay. The computer can simultaneously simulate 64 million biological neurons and 16 billion biological synapses while consuming only 40 watts, 100 times less than traditional supercomputers. They planned to demonstrate an airborne target recognition application developed on Blue Jay in 2019. By 2024, they aim to enable real-time analysis of 10 times more big data than current global Internet traffic. This would turn the big data that constrain the next generation of warplanes from a problem into a resource and greatly shorten the development cycle of defense technology and engineering.</p>
</sec>
</sec>
<sec id="s6">
<title>Challenges</title>
<sec>
<title>Novel observation and simultaneous modulation techniques for brain activity face challenges</title>
<p>Brain observation and modulation technologies are important technical means for understanding the input, transmission, and output mechanisms of brain information, and they are also the core technical support for understanding, simulating, and enhancing the brain. Although <italic>in vivo</italic> means of acquiring and modulating brain neural information through MRI, optical/optogenetic imaging, and other technologies are becoming more abundant and are developing rapidly, current research still faces the following problems: single observation modes and modulation means, partial observation information, a lack of knowledge of brain function, and the inability to modulate and observe the brain simultaneously.</p>
<sec>
<title>The mathematical principles and computational models of brain information processing are not well developed</title>
<p>Neuroscientists have a fairly clear understanding of single-neuron models, the principles of information transfer in some neural circuits, and the mechanisms of primary perceptual functions. But understanding of global information processing in the brain, especially of higher cognitive functions, is still very sketchy (Aimone, <xref ref-type="bibr" rid="B7">2021</xref>). To build a computational model that can explain brain information processing and perform cognitive tasks, we must first understand the mathematical principles of brain information processing.</p>
</sec>
<sec>
<title>Immature hardware processes for brain-like computing</title>
<p>Using hardware to simulate brain-like computational processes still faces important challenges in terms of brain-like architectures, devices, and chips. On the one hand, CMOS and other traditional processes have hit bottlenecks in on-chip storage density and power consumption (Chauhan, <xref ref-type="bibr" rid="B27">2019</xref>), while new nano-devices still have outstanding problems, such as poor process stability and difficulty in scaling; brain-like materials and devices require new technologies to break through these bottlenecks (Chen L. et al., <xref ref-type="bibr" rid="B28">2021</xref>; Chen T. et al., <xref ref-type="bibr" rid="B29">2021</xref>; Wang et al., <xref ref-type="bibr" rid="B128">2021</xref>; Zhang et al., <xref ref-type="bibr" rid="B139">2021</xref>). On the other hand, brain-like systems require tens of billions of neurons to work together, yet existing brain-like chips struggle to achieve large-scale interconnection of neurons and efficient real-time transmission of neuronal spike information under the constraints of limited hardware resources and energy budgets.</p>
</sec>
<sec>
<title>The efficiency of interpreting human brain signals urgently needs improvement</title>
<p>The complexity of the brain and the great differences between brain and machine lead to poor robustness of brain-signal acquisition, low efficiency of brain&#x02013;machine interaction, a lack of means for intervening in brain intelligence, demanding requirements on brain-region intervention targets, and difficulty in constructing fusion systems. Given the complementary nature of machine intelligence and human intelligence, the main challenges of brain-like research are how to efficiently interpret the information transmitted by the human brain, realize the interconnection of biological and machine intelligence, integrate their respective strengths, and create intelligent forms with stronger performance (Guo and Yang, <xref ref-type="bibr" rid="B58">2022</xref>).</p>
</sec>
</sec>
</sec>
<sec id="s7">
<title>Prospects</title>
<sec>
<title>Brain-like computing model</title>
<p>The study of brain-like computing models (Voutsas et al., <xref ref-type="bibr" rid="B127">2005</xref>) is an important foundation of brain-like computing and determines the upper limit of neuromorphic computing from the bottom up; it mainly covers neuron models, neural network models, and their learning methods. Brain-like computational models can be expected to develop in the following directions: studying the dynamic coding mechanisms of biological neurons and neural networks, and establishing efficient, biologically plausible spike coding theories and models with multimodal coordination and joint representation across multiple spatial and temporal scales; studying the coordination mechanisms of multi-synaptic plasticity, cross-scale learning plasticity, and the global plasticity of biological neural networks; establishing efficient learning theories and algorithms for deep SNNs to realize intelligent learning, reasoning, and decision-making under multiple collaborative brain-like cognitive tasks; and studying mathematical descriptions of different levels of brain organization and continuous learning strategies across multiple neural computational time scales to realize rapid association, transfer, and storage of multimodal information.</p>
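<p>As one concrete instance of the synaptic plasticity mechanisms mentioned above, the pair-based spike-timing-dependent plasticity (STDP) rule used by several of the chips in Table 2 can be sketched as follows; the amplitudes and time constants here are illustrative, not biological fits:</p>

```python
import math

# Pair-based STDP sketch: the weight change depends on the relative
# timing of a pre- and postsynaptic spike. Amplitudes (a_plus, a_minus)
# and time constants (tau_plus, tau_minus) are illustrative values.
def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one spike pair, dt = t_post - t_pre (in ms)."""
    if dt > 0:     # pre fires before post -> potentiation (strengthen)
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:   # post fires before pre -> depression (weaken)
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# Causal pairs strengthen the synapse, anti-causal pairs weaken it,
# and the effect decays as the spikes move further apart in time.
print(stdp_dw(10.0), stdp_dw(-10.0))
```

<p>Because the update depends only on locally available spike times, rules of this family map naturally onto the per-synapse circuits of neuromorphic hardware.</p>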
</sec>
<sec>
<title>Neuromorphic devices</title>
<p>The current development of artificial neuromorphic devices follows two main technical routes. One builds on mature CMOS technology using SRAM or DRAM (Asghar et al., <xref ref-type="bibr" rid="B12">2021</xref>), and such prototype devices are volatile in terms of information storage; the other builds on non-volatile Flash devices or new memory devices and materials (Feng et al., <xref ref-type="bibr" rid="B43">2021</xref>; He et al., <xref ref-type="bibr" rid="B62">2021</xref>). Non-volatile neuromorphic devices are memristors with artificial neuromorphic characteristics and unique nonlinear properties, which have become new basic information processing units that mimic biological neurons and synapses (Yang et al., <xref ref-type="bibr" rid="B133">2013</xref>; Prezioso et al., <xref ref-type="bibr" rid="B102">2015</xref>). Future inquiry needs to clarify some basic questions: in neural operations, to what level of detail must biological neural properties be simulated, and which functions are primary in neuromorphic operation? These issues are critical to the implementation of neural computing.</p>
</sec>
<sec>
<title>Neuromorphic computing chips</title>
<p>Artificial neural network chips have made progress in practical applications, whereas spiking neural network chips are still at the exploratory application stage. Future research on neuromorphic chips can proceed from several directions: the architecture, operation methods, and peripheral circuit technology of neuromorphic component arrays suitable for convolutional and matrix operations; hardware description and mapping schemes from neural network algorithms to new neuromorphic component arrays; compensation algorithms and circuit compensation methods for the various non-ideal factors of new neuromorphic components; and data routing methods between arrays.</p>
</sec>
<sec>
<title>Neuromorphic computing supporting system</title>
<p>In practical applications, however, the results are not yet satisfactory. For example, the efficiency of online learning is much lower than the speed of neural computing, and the efficiency and accuracy of SNN learning are not yet as good as those of traditional ANNs. We can study the efficient deployment of neural network learning and training algorithms and methods to compensate for the learning performance lost while improving computing efficiency, and carry out tape-out verification of prototype chips; we can also build a large-scale brain-like computing chip simulation platform with online learning functions and demonstrate a variety of online learning applications based on brain-like chips. In this way, the potential of neuromorphic computing can be developed in terms of platforms, systems, applications, and algorithms.</p>
<p>At present, brain-like computing technology is still some distance from formal industrial deployment (Zou et al., <xref ref-type="bibr" rid="B141">2021</xref>), but it will certainly be one of the important points of contention among countries and enterprises over the next 10 years. This is therefore an opportunity for all researchers, and whether brain-like computing can be truly applied in everyday production and life depends on key research breakthroughs. We hope that researchers will achieve the technological breakthroughs that bring brain-like computing to life as soon as possible.</p>
</sec>
</sec>
<sec id="s8">
<title>Author contributions</title>
<p>WO: conception and design of study. CZ: participated in the literature collection, collation of the article, and was responsible for the second and third revision of the article. SX: acquisition of data. WH: analysis and interpretation of data. QZ: revising the manuscript critically for important intellectual content. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>This work was supported in part by the Hainan Provincial Natural Science Foundation of China (621RC508), Henan Key Laboratory of Network Cryptography Technology (Grant/Award Number: LNCT2021-A16), and the Science Project of Hainan University (KYQD(ZR)-21075).</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Aamir</surname> <given-names>S. A.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>P.</given-names></name> <name><surname>Hartel</surname> <given-names>A.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name></person-group> (<year>2016</year>). <article-title>A highly tunable 65-nm CMOS LIF neuron for a largescale neuromorphic system</article-title>, in <source>ESSCIRC Conference 2016, 42nd. European Solid-State Circuits Conference</source> (<publisher-loc>IEEE</publisher-loc>), p. <fpage>71</fpage>&#x02013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1109/ESSCIRC.2016.7598245</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aamir</surname> <given-names>S. A.</given-names></name> <name><surname>Stradmann</surname> <given-names>Y.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>P.</given-names></name> <name><surname>Pehle</surname> <given-names>C.</given-names></name> <name><surname>Hartel</surname> <given-names>A.</given-names></name> <name><surname>Gr&#x000FC;bl</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>An accelerated lif neuronal network array for a large-scale mixed-signal neuromorphic architecture</article-title>. <source>IEEE Trans. Circuits Syst. I Regular Papers.</source> <volume>65</volume>, <fpage>4299</fpage>&#x02013;<lpage>4312</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2018.2840718</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>1999</year>). <article-title>Lapicque&#x00027;s introduction of the integrate-and-fire model neuron</article-title>. <source>Brain Res. Bull.</source> <volume>50</volume>, <fpage>303</fpage>&#x02013;<lpage>304</lpage>. <pub-id pub-id-type="doi">10.1016/S0361-9230(99)00161-6</pub-id><pub-id pub-id-type="pmid">10643408</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Abramsky</surname> <given-names>S.</given-names></name> <name><surname>Tzevelekos</surname> <given-names>N.</given-names></name></person-group> (<year>2010</year>). <source>Introduction To Categories and Categorical Logic/New Structures For Physics.</source> <publisher-loc>Berlin, Heidelberg</publisher-loc>: <publisher-name>Springer, p</publisher-name>. <fpage>3</fpage>&#x02013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-12821-9_1</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Adam</surname> <given-names>L.</given-names></name></person-group> (<year>2014</year>). <article-title>The moments of the Gompertz distribution and maximum likelihood estimation of its parameters</article-title>. <source>Scand Actuarial J.</source> <volume>23</volume>, <fpage>255</fpage>&#x02013;<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1080/03461238.2012.687697</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Agebure</surname> <given-names>M. A.</given-names></name> <name><surname>Wumnaya</surname> <given-names>P. A.</given-names></name> <name><surname>Baagyere</surname> <given-names>E. Y.</given-names></name></person-group> (<year>2021</year>). <article-title>A survey of supervised learning models for spiking neural network</article-title>. <source>Asian J. Res. Comput. Sci</source>. <volume>9</volume>, <fpage>35</fpage>&#x02013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.9734/ajrcos/2021/v9i430228</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aimone</surname> <given-names>J. B.</given-names></name></person-group> (<year>2021</year>). <article-title>A roadmap for reaching the potential of brain-derived computing</article-title>. <source>Adv. Intell. Syst.</source> <volume>3</volume>, <fpage>2000191</fpage>. <pub-id pub-id-type="doi">10.1002/aisy.202000191</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allo</surname> <given-names>C. B.</given-names></name> <name><surname>Otok</surname> <given-names>B. W.</given-names></name></person-group> (<year>2019</year>). <article-title>Estimation parameter of generalized poisson regression model using generalized method of moments and its application</article-title>. <source>IOP Conf. Ser. Mater. Sci. Eng.</source> <volume>546</volume>, <fpage>50</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1088/1757-899X/546/5/052050</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amundson</surname> <given-names>J.</given-names></name> <name><surname>Annis</surname> <given-names>J.</given-names></name> <name><surname>Avestruz</surname> <given-names>C.</given-names></name> <name><surname>Bowring</surname> <given-names>D.</given-names></name> <name><surname>Caldeira</surname> <given-names>J.</given-names></name> <name><surname>Cerati</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2019</year>). <source>Response to NITRD, NCO, NSF Request for Information on &#x0201C;Update to the 2016 National Artificial Intelligence Research and Development Strategic Plan</source>&#x0201D;. arXiv preprint arXiv:1911.05796. <pub-id pub-id-type="doi">10.2172/1592156</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amunts</surname> <given-names>K.</given-names></name> <name><surname>Ebell</surname> <given-names>C.</given-names></name> <name><surname>Muller</surname> <given-names>J.</given-names></name> <name><surname>Telefont</surname> <given-names>M.</given-names></name> <name><surname>Knoll</surname> <given-names>A.</given-names></name> <name><surname>Lippert</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>The human brain project: creating a European research infrastructure to decode the human brain</article-title>. <source>Neuron</source> <volume>92</volume>, <fpage>574</fpage>&#x02013;<lpage>581</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2016.10.046</pub-id><pub-id pub-id-type="pmid">27809997</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andreopoulos</surname> <given-names>A.</given-names></name> <name><surname>Kashyap</surname> <given-names>H. J.</given-names></name> <name><surname>Nayak</surname> <given-names>T. K.</given-names></name> <name><surname>Amir</surname> <given-names>A.</given-names></name> <name><surname>Flickner</surname> <given-names>M. D.</given-names></name></person-group> (<year>2018</year>). <article-title>A low power, high throughput, fully event-based stereo system</article-title>. In <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>. p. <fpage>7532</fpage>&#x02013;<lpage>7542</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2018.00786</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Asghar</surname> <given-names>M. S.</given-names></name> <name><surname>Arslan</surname> <given-names>S.</given-names></name> <name><surname>Kim</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>A low-power spiking neural network chip based on a compact LIF neuron and binary exponential charge injector synapse circuits</article-title>. <source>Sensors</source> <volume>21</volume>, <fpage>4462</fpage>. <pub-id pub-id-type="doi">10.3390/s21134462</pub-id><pub-id pub-id-type="pmid">34210045</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benchehida</surname> <given-names>C.</given-names></name> <name><surname>Benhaoua</surname> <given-names>M. K.</given-names></name> <name><surname>Zahaf</surname> <given-names>H. E.</given-names></name> <name><surname>Lipari</surname> <given-names>G.</given-names></name></person-group> (<year>2022</year>). <article-title>Memory-processor co-scheduling for real-time tasks on network-on-chip manycore architectures</article-title>. <source>Int. J. High Perf. Syst. Architect.</source> <volume>11</volume>, <fpage>1</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1504/IJHPSA.2022.121877</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benjamin</surname> <given-names>B. V.</given-names></name> <name><surname>Gao</surname> <given-names>P.</given-names></name> <name><surname>McQuinn</surname> <given-names>E.</given-names></name> <name><surname>Choudhary</surname> <given-names>S.</given-names></name> <name><surname>Chandrasekaran</surname> <given-names>A. R.</given-names></name> <name><surname>Bussat</surname> <given-names>J. M.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Neurogrid: a mixed-analog-digital multichip system for large-scale neural simulations</article-title>. <source>Proc. IEEE.</source> <volume>102</volume>, <fpage>699</fpage>&#x02013;<lpage>716</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2014.2313565</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bi</surname> <given-names>G.</given-names></name> <name><surname>Poo</surname> <given-names>M.</given-names></name></person-group> (<year>1999</year>). <article-title>Distributed synaptic modification in neural networks induced by patterned stimulation</article-title>. <source>Nature.</source> <volume>401</volume>, <fpage>792</fpage>&#x02013;<lpage>796</lpage>. <pub-id pub-id-type="doi">10.1038/44573</pub-id><pub-id pub-id-type="pmid">10548104</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Birkhoff</surname> <given-names>G.</given-names></name></person-group> (<year>1940</year>). <source>Lattice Theory</source>. <publisher-loc>Rhode Island, RI</publisher-loc>: <publisher-name>American Mathematical Society</publisher-name>. <pub-id pub-id-type="doi">10.1090/coll/025</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boahen</surname> <given-names>K. A.</given-names></name></person-group> (<year>2000</year>). <article-title>Point-to-point connectivity between neuromorphic chips using address events</article-title>. <source>IEEE Trans. Circuits Syst. II Analog Digital Signal Process.</source> <volume>47</volume>, <fpage>416</fpage>&#x02013;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1109/82.842110</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boahen</surname> <given-names>K. A.</given-names></name> <name><surname>Pouliquen</surname> <given-names>P. O.</given-names></name> <name><surname>Andreou</surname> <given-names>A. G.</given-names></name> <name><surname>Jenkins</surname> <given-names>R. E.</given-names></name></person-group> (<year>1989</year>). <article-title>A heteroassociative memory using current-mode MOS analog VLSI circuits</article-title>. <source>IEEE Trans. Circuits Syst.</source> <volume>36</volume>, <fpage>747</fpage>&#x02013;<lpage>755</lpage>. <pub-id pub-id-type="doi">10.1109/31.31323</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boybat</surname> <given-names>I.</given-names></name> <name><surname>Gallo</surname> <given-names>M. L.</given-names></name> <name><surname>Nandakumar</surname> <given-names>S. R.</given-names></name> <name><surname>Moraitis</surname> <given-names>T.</given-names></name> <name><surname>Parnell</surname> <given-names>T.</given-names></name> <name><surname>Tuma</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Neuromorphic computing with multi-memristive synapses</article-title>. <source>Nat. Commun.</source> <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1038/s41467-018-04933-y</pub-id><pub-id pub-id-type="pmid">29955057</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brette</surname> <given-names>R.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2005</year>). <article-title>Adaptive exponential integrate-and-fire model as an effective description of neuronal activity</article-title>. <source>J. Neurophysiol.</source> <volume>94</volume>, <fpage>3637</fpage>&#x02013;<lpage>3642</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00686.2005</pub-id><pub-id pub-id-type="pmid">16014787</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burkitt</surname> <given-names>A. N.</given-names></name></person-group> (<year>2006</year>). <article-title>A review of the integrate-and-fire neuron model: I. homogeneous synaptic input</article-title>. <source>Biol. Cybern.</source> <volume>95</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-006-0068-6</pub-id><pub-id pub-id-type="pmid">16622699</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Butts</surname> <given-names>D. A.</given-names></name> <name><surname>Weng</surname> <given-names>C.</given-names></name> <name><surname>Jin</surname> <given-names>J.</given-names></name> <name><surname>Yeh</surname> <given-names>C. I.</given-names></name> <name><surname>Lesica</surname> <given-names>N. A.</given-names></name> <name><surname>Alonso</surname> <given-names>J. M.</given-names></name> <etal/></person-group>. (<year>2007</year>). <article-title>Temporal precision in the neural code and the timescales of natural vision</article-title>. <source>Nature</source> <volume>449</volume>, <fpage>92</fpage>&#x02013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1038/nature06105</pub-id><pub-id pub-id-type="pmid">17805296</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Khosla</surname> <given-names>D.</given-names></name></person-group> (<year>2014</year>). <article-title>Spiking deep convolutional neural networks for energy-efficient object recognition</article-title>. <source>Int. J. Comput. Vis.</source> <volume>113</volume>, <fpage>1</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1007/s11263-014-0788-3</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Casali</surname> <given-names>S.</given-names></name> <name><surname>Marenzi</surname> <given-names>E.</given-names></name> <name><surname>Medini</surname> <given-names>C.</given-names></name> <name><surname>Casellato</surname> <given-names>C.</given-names></name> <name><surname>D&#x00027;Angelo</surname> <given-names>E.</given-names></name></person-group> (<year>2019</year>). <article-title>Reconstruction and simulation of a scaffold model of the cerebellar network</article-title>. <source>Front. Neuroinform.</source> <volume>13</volume>, <fpage>37</fpage>. <pub-id pub-id-type="doi">10.3389/fninf.2019.00037</pub-id><pub-id pub-id-type="pmid">31354466</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chai</surname> <given-names>L.</given-names></name> <name><surname>Gao</surname> <given-names>Q.</given-names></name> <name><surname>Panda</surname> <given-names>D. K.</given-names></name></person-group> (<year>2007</year>). <article-title>Understanding the impact of multi-core architecture in cluster computing: a case study with Intel dual-core system</article-title>, in <source>Seventh IEEE International Symposium On Cluster Computing and the Grid</source> CCGrid&#x00027;07 (IEEE), p. <fpage>471</fpage>&#x02013;<lpage>478</lpage>. <pub-id pub-id-type="doi">10.1109/CCGRID.2007.119</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chaparro</surname> <given-names>P.</given-names></name> <name><surname>Gonz&#x000E1;lez</surname> <given-names>J.</given-names></name> <name><surname>Magklis</surname> <given-names>G.</given-names></name> <name><surname>Cai</surname> <given-names>Q.</given-names></name> <name><surname>Gonz&#x000E1;lez</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Understanding the thermal implications of multi-core architectures</article-title>. <source>IEEE Trans. Parallel Distrib. Syst.</source> <volume>18</volume>, <fpage>1055</fpage>&#x02013;<lpage>1065</lpage>. <pub-id pub-id-type="doi">10.1109/TPDS.2007.1092</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chauhan</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Neuromorphic computing hardware: a review</article-title>. <source>J. Homepage</source> <volume>2582</volume>, <fpage>7421</fpage>.</citation>
</ref>
<ref id="B28">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>T. Y.</given-names></name> <name><surname>Ding</surname> <given-names>S. J.</given-names></name> <name><surname>Zhang</surname> <given-names>D. W.</given-names></name></person-group> (<year>2021</year>). <source>ALD Based Flexible Memristive Synapses for Neuromorphic Computing Application.</source> ECS Meeting Abstracts (<publisher-loc>Bristol</publisher-loc>: <publisher-name>IOP Publishing</publisher-name>), <fpage>874</fpage>. <pub-id pub-id-type="doi">10.1149/MA2021-0229874mtgabs</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>T.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Hao</surname> <given-names>D.</given-names></name> <name><surname>Dai</surname> <given-names>S.</given-names></name> <name><surname>Ou</surname> <given-names>Q.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Photonic synapses with ultra-low energy consumption based on vertical organic field-effect transistors</article-title>. <source>Adv. Opt. Mater.</source> <volume>9</volume>, <fpage>2002030</fpage>. <pub-id pub-id-type="doi">10.1002/adom.202002030</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheng</surname> <given-names>N.</given-names></name> <name><surname>Phua</surname> <given-names>K. S.</given-names></name> <name><surname>Lai</surname> <given-names>H. S.</given-names></name> <name><surname>Tam</surname> <given-names>P. K.</given-names></name> <name><surname>Tang</surname> <given-names>K. Y.</given-names></name> <name><surname>Cheng</surname> <given-names>K. K.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Brain-computer interface-based soft robotic glove rehabilitation for stroke</article-title>. <source>IEEE Trans. Bio-Med. Eng.</source> <volume>67</volume>, <fpage>3339</fpage>&#x02013;<lpage>3351</lpage>. <pub-id pub-id-type="doi">10.1109/TBME.2020.2984003</pub-id><pub-id pub-id-type="pmid">32248089</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Choi</surname> <given-names>T. Y.</given-names></name> <name><surname>Merolla</surname> <given-names>P. A.</given-names></name> <name><surname>Arthur</surname> <given-names>J. V.</given-names></name> <name><surname>Boahen</surname> <given-names>K. A.</given-names></name> <name><surname>Shi</surname> <given-names>B. E.</given-names></name></person-group> (<year>2005</year>). <article-title>Neuromorphic implementation of orientation hypercolumns</article-title>. <source>IEEE Trans. Circuits Syst. I Regular Papers.</source> <volume>52</volume>, <fpage>1049</fpage>&#x02013;<lpage>1060</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2005.849136</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cui</surname> <given-names>G.</given-names></name> <name><surname>Ye</surname> <given-names>W.</given-names></name> <name><surname>Hou</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>T.</given-names></name> <name><surname>Liu</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Research on low-power main control chip architecture based on edge computing technology</article-title>. <source>J. Phys. Conf. Ser.</source> <volume>1802</volume>, <fpage>032031</fpage>. <pub-id pub-id-type="doi">10.1088/1742-6596/1802/3/032031</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Czech</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Brain-computer interface use to control military weapons and tools</article-title>, <source>International Scientific Conference on Brain-Computer Interfaces BCI Opole</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), p. <fpage>196</fpage>&#x02013;<lpage>204</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-72254-8_20</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Das</surname> <given-names>R.</given-names></name> <name><surname>Biswas</surname> <given-names>C.</given-names></name> <name><surname>Majumder</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <article-title>Study of spiking neural network architecture for neuromorphic computing</article-title>, in <source>2022 IEEE 11th International Conference on Communication, Systems and Network Technologies CSNT</source> (<publisher-loc>IEEE</publisher-loc>), p. <fpage>373</fpage>&#x02013;<lpage>379</lpage>. <pub-id pub-id-type="doi">10.1109/CSNT54456.2022.9787590</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delponte</surname> <given-names>L.</given-names></name> <name><surname>Tamburrini</surname> <given-names>G.</given-names></name></person-group> (<year>2018</year>). <article-title>European artificial intelligence (AI) leadership, the path for an integrated vision</article-title>. <source>European Parliament.</source></citation>
</ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dennis</surname> <given-names>J.</given-names></name> <name><surname>Yu</surname> <given-names>Q.</given-names></name> <name><surname>Tang</surname> <given-names>H.</given-names></name> <name><surname>Tran</surname> <given-names>H. D.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>Temporal coding of local spectrogram features for robust sound recognition</article-title>, in <source>2013 IEEE International Conference on Acoustics, Speech and Signal Processing</source> (<publisher-loc>IEEE</publisher-loc>), p. <fpage>803</fpage>&#x02013;<lpage>807</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2013.6637759</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Diehl</surname> <given-names>P. U.</given-names></name> <name><surname>Neil</surname> <given-names>D.</given-names></name> <name><surname>Binas</surname> <given-names>J.</given-names></name> <name><surname>Cook</surname> <given-names>M.</given-names></name> <name><surname>Liu</surname> <given-names>S. C.</given-names></name> <name><surname>Pfeiffer</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing</article-title>, in <source>2015 International Joint Conference on Neural Networks (IJCNN)</source> (<publisher-loc>IEEE</publisher-loc>). <pub-id pub-id-type="doi">10.1109/IJCNN.2015.7280696</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ding</surname> <given-names>C.</given-names></name> <name><surname>Huan</surname> <given-names>Y.</given-names></name> <name><surname>Jia</surname> <given-names>H.</given-names></name> <name><surname>Yan</surname> <given-names>Y.</given-names></name> <name><surname>Yang</surname> <given-names>F.</given-names></name> <name><surname>Liu</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>A hybrid-mode on-chip router for the large-scale FPGA-based neuromorphic platform</article-title>. <source>IEEE Trans. Circuits Syst. I Regular Papers</source> <volume>69</volume>, <fpage>1990</fpage>&#x02013;<lpage>2001</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2022.3145016</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dobs</surname> <given-names>K.</given-names></name> <name><surname>Martinez</surname> <given-names>J.</given-names></name> <name><surname>Kell</surname> <given-names>A. J. E.</given-names></name> <name><surname>Kanwisher</surname> <given-names>N.</given-names></name></person-group> (<year>2022</year>). <article-title>Brain-like functional specialization emerges spontaneously in deep neural networks</article-title>. <source>Sci. Adv.</source> <volume>8</volume>, <fpage>eabl8913</fpage>. <pub-id pub-id-type="doi">10.1126/sciadv.abl8913</pub-id><pub-id pub-id-type="pmid">35294241</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Du</surname> <given-names>C.</given-names></name> <name><surname>Liu</surname> <given-names>C.</given-names></name> <name><surname>Balamurugan</surname> <given-names>P.</given-names></name> <name><surname>Selvaraj</surname> <given-names>P.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep learning-based mental health monitoring scheme for college students using convolutional neural network</article-title>. <source>Int. J. Artificial Intell. Tools</source> <volume>30</volume>, <issue>06n08</issue>. <pub-id pub-id-type="doi">10.1142/S0218213021400145</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dwyer</surname> <given-names>D. B.</given-names></name> <name><surname>Falkai</surname> <given-names>P.</given-names></name> <name><surname>Koutsouleris</surname> <given-names>N.</given-names></name></person-group> (<year>2018</year>). <article-title>Machine learning approaches for clinical psychology and psychiatry</article-title>. <source>Annu. Rev. Clin. Psychol.</source> <volume>14</volume>, <fpage>91</fpage>&#x02013;<lpage>118</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-clinpsy-032816-045037</pub-id><pub-id pub-id-type="pmid">29401044</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ecco</surname> <given-names>L.</given-names></name> <name><surname>Ernst</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>Tackling the bus turnaround overhead in real-time SDRAM controllers</article-title>. <source>IEEE Trans. Comput.</source> <volume>66</volume>, <fpage>1961</fpage>&#x02013;<lpage>1974</lpage>. <pub-id pub-id-type="doi">10.1109/TC.2017.2714672</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feng</surname> <given-names>C.</given-names></name> <name><surname>Gu</surname> <given-names>J.</given-names></name> <name><surname>Zhu</surname> <given-names>H.</given-names></name> <name><surname>Ying</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>Z.</given-names></name> <name><surname>Pan</surname> <given-names>D. Z.</given-names></name> <etal/></person-group>. (<year>2021</year>). <source>Silicon Photonic Subspace Neural Chip for Hardware-Efficient Deep Learning</source>.</citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Florian</surname> <given-names>R. V.</given-names></name></person-group> (<year>2012</year>). <article-title>The chronotron: a neuron that learns to fire temporally precise spike patterns</article-title>. <source>PLoS ONE</source> <volume>7</volume>, <fpage>e40233</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0040233</pub-id><pub-id pub-id-type="pmid">22879876</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friedmann</surname> <given-names>S.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Grubl</surname> <given-names>A.</given-names></name> <name><surname>Hartel</surname> <given-names>A.</given-names></name> <name><surname>Hock</surname> <given-names>M.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Demonstrating hybrid learning in a flexible neuromorphic hardware system</article-title>. <source>IEEE Trans. Biomed. Circuits Syst.</source> <volume>11</volume>, <fpage>128</fpage>&#x02013;<lpage>142</lpage>. <pub-id pub-id-type="doi">10.1109/TBCAS.2016.2579164</pub-id><pub-id pub-id-type="pmid">28113678</pub-id></citation></ref>
<ref id="B46">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gardner</surname> <given-names>B.</given-names></name> <name><surname>Gruning</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>Classifying patterns in a spiking neural network</article-title>, in <source>Proceedings of the 22nd European Symposium on Artificial Neural Networks (ESANN2014)</source> (<publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>23</fpage>&#x02013;<lpage>28</lpage>.</citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garrido</surname> <given-names>M.</given-names></name> <name><surname>Pirsch</surname> <given-names>P.</given-names></name></person-group> (<year>2020</year>). <article-title>Continuous-flow matrix transposition using memories</article-title>. <source>IEEE Trans. Circuits Syst. I Regular Papers</source> <volume>67</volume>, <fpage>3035</fpage>&#x02013;<lpage>3046</lpage>. <pub-id pub-id-type="doi">10.1109/TCSI.2020.2987736</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geminiani</surname> <given-names>A.</given-names></name> <name><surname>Casellato</surname> <given-names>C.</given-names></name> <name><surname>D&#x00027;Angelo</surname> <given-names>E.</given-names></name> <name><surname>Pedrocchi</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Complex electroresponsive dynamics in olivocerebellar neurons represented with extended-generalized leaky integrate and fire models</article-title>. <source>Front. Comput. Neurosci.</source> <volume>13</volume>, <fpage>35</fpage>. <pub-id pub-id-type="doi">10.3389/fncom.2019.00035</pub-id><pub-id pub-id-type="pmid">31379546</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gerstner</surname> <given-names>W.</given-names></name> <name><surname>Kistler</surname> <given-names>W. M.</given-names></name></person-group> (<year>2002</year>). <source>Spiking Neuron Models: Single Neurons, Populations, Plasticity</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511815706</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghosh-Dastidar</surname> <given-names>S.</given-names></name> <name><surname>Adeli</surname> <given-names>H.</given-names></name></person-group> (<year>2009</year>). <article-title>A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection</article-title>. <source>Neural Netw.</source> <volume>22</volume>, <fpage>1419</fpage>&#x02013;<lpage>1431</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2009.04.003</pub-id><pub-id pub-id-type="pmid">19447005</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>G&#x000FC;tig</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>To spike, or when to spike?</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>25</volume>, <fpage>134</fpage>&#x02013;<lpage>139</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2014.01.004</pub-id><pub-id pub-id-type="pmid">24468508</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gjorgjieva</surname> <given-names>J.</given-names></name> <name><surname>Clopath</surname> <given-names>C.</given-names></name> <name><surname>Audet</surname> <given-names>J.</given-names></name> <name><surname>Pfister</surname> <given-names>J.-P.</given-names></name></person-group> (<year>2011</year>). <article-title>A triplet spike-timing&#x02013;dependent plasticity model generalizes the Bienenstock&#x02013;Cooper&#x02013;Munro rule to higher-order spatiotemporal correlations</article-title>. <source>Proc. Natl. Acad. Sci.</source> <volume>108</volume>, <fpage>19383</fpage>&#x02013;<lpage>19388</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1105933108</pub-id><pub-id pub-id-type="pmid">22080608</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gleeson</surname> <given-names>P.</given-names></name> <name><surname>Cantarelli</surname> <given-names>M.</given-names></name> <name><surname>Marin</surname> <given-names>B.</given-names></name> <name><surname>Quintana</surname> <given-names>A.</given-names></name> <name><surname>Earnshaw</surname> <given-names>M.</given-names></name> <name><surname>Sadeh</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Open source brain: a collaborative resource for visualizing, analyzing, simulating, and developing standardized models of neurons and circuits</article-title>. <source>Neuron</source> <volume>103</volume>, <fpage>395</fpage>&#x02013;<lpage>411</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2019.05.019</pub-id><pub-id pub-id-type="pmid">31201122</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goossens</surname> <given-names>S.</given-names></name> <name><surname>Chandrasekar</surname> <given-names>K.</given-names></name> <name><surname>Akesson</surname> <given-names>B.</given-names></name> <name><surname>Goossens</surname> <given-names>K.</given-names></name></person-group> (<year>2016</year>). <article-title>Power/performance trade-offs in real-time SDRAM command scheduling</article-title>. <source>IEEE Trans. Comput.</source> <volume>65</volume>, <fpage>1882</fpage>&#x02013;<lpage>1895</lpage>. <pub-id pub-id-type="doi">10.1109/TC.2015.2458859</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grassia</surname> <given-names>F.</given-names></name> <name><surname>Levi</surname> <given-names>T.</given-names></name> <name><surname>Doukkali</surname> <given-names>E.</given-names></name> <name><surname>Kohno</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Spike pattern recognition using artificial neuron and spike-timing-dependent plasticity implemented on a multi-core embedded platform</article-title>. <source>Artificial Life Robot.</source> <volume>23</volume>, <fpage>200</fpage>&#x02013;<lpage>204</lpage>. <pub-id pub-id-type="doi">10.1007/s10015-017-0421-y</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gr&#x000FC;bl</surname> <given-names>A.</given-names></name> <name><surname>Billaudelle</surname> <given-names>S.</given-names></name> <name><surname>Cramer</surname> <given-names>B.</given-names></name> <name><surname>Karasenko</surname> <given-names>V.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Verification and design methods for the brainscales neuromorphic hardware system</article-title>. <source>J. Signal Process. Syst.</source> <volume>92</volume>, <fpage>1277</fpage>&#x02013;<lpage>1292</lpage>. <pub-id pub-id-type="doi">10.1007/s11265-020-01558-7</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gu</surname> <given-names>Z. H.</given-names></name> <name><surname>Pan</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Brain computing based on neural mimicry</article-title>. <source>Commun. CCF</source> <volume>11</volume>, <fpage>10</fpage>&#x02013;<lpage>20</lpage>.</citation></ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>C.</given-names></name> <name><surname>Yang</surname> <given-names>K.</given-names></name></person-group> (<year>2022</year>). <source>Preliminary Concept of General Intelligent Network (GIN) for Brain-Like Intelligence.</source></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>G&#x000FC;tig</surname> <given-names>R.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>2006</year>). <article-title>The tempotron: a neuron that learns spike timing&#x02013;based decisions</article-title>. <source>Nat. Neurosci.</source> <volume>9</volume>, <fpage>420</fpage>&#x02013;<lpage>428</lpage>. <pub-id pub-id-type="doi">10.1038/nn1643</pub-id><pub-id pub-id-type="pmid">16474393</pub-id></citation></ref>
<ref id="B60">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hammerstrom</surname> <given-names>D.</given-names></name></person-group> (<year>1990</year>). <article-title>A VLSI architecture for high-performance, low-cost, on-chip learning</article-title>, in <source>1990 IJCNN International Joint Conference on Neural Networks (IEEE)</source>, p. <fpage>537</fpage>&#x02013;<lpage>544</lpage>. <pub-id pub-id-type="doi">10.1109/IJCNN.1990.137621</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hao</surname> <given-names>Y.</given-names></name> <name><surname>Xiang</surname> <given-names>S.</given-names></name> <name><surname>Han</surname> <given-names>G.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Ma</surname> <given-names>X.</given-names></name> <name><surname>Zhu</surname> <given-names>Z.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Recent progress of integrated circuits and optoelectronic chips</article-title>. <source>Sci. China Inf. Sci.</source> <volume>64</volume>, <fpage>1</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1007/s11432-021-3235-7</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>Y.</given-names></name> <name><surname>Jiang</surname> <given-names>S.</given-names></name> <name><surname>Chen</surname> <given-names>C.</given-names></name> <name><surname>Wan</surname> <given-names>C.</given-names></name> <name><surname>Shi</surname> <given-names>Y.</given-names></name> <name><surname>Wan</surname> <given-names>Q.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Electrolyte-gated neuromorphic transistors for brain-like dynamic computing</article-title>. <source>J. Appl. Phys.</source> <volume>130</volume>, <fpage>190904</fpage>. <pub-id pub-id-type="doi">10.1063/5.0069456</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hermeline</surname> <given-names>F.</given-names></name></person-group> (<year>2007</year>). <article-title>Approximation of 2-D and 3-D diffusion operators with variable full tensor coefficients on arbitrary meshes</article-title>. <source>Comput. Methods Appl. Mech. Eng.</source> <volume>196</volume>, <fpage>2497</fpage>&#x02013;<lpage>2526</lpage>. <pub-id pub-id-type="doi">10.1016/j.cma.2007.01.005</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hodges</surname> <given-names>A.</given-names></name></person-group> (<year>1992</year>). <source>Alan Turing: The Enigma</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Vintage</publisher-name>.</citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>D.</given-names></name> <name><surname>Wang</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Yan</surname> <given-names>J. A.</given-names></name></person-group> (<year>2022</year>). <article-title>Survey of quantum computing hybrid applications with brain-computer interface</article-title>. <source>Cognit. Robot.</source> <volume>2</volume>, <fpage>164</fpage>&#x02013;<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogr.2022.07.002</pub-id></citation>
</ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title>Brain-like computing</article-title>. <source>Computing Now, IEEE Computer Society</source>. <fpage>9</fpage>.</citation>
</ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>Y.</given-names></name> <name><surname>Qiao</surname> <given-names>X.</given-names></name> <name><surname>Dustdar</surname> <given-names>S.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name></person-group> (<year>2022</year>). <article-title>Toward decentralized and collaborative deep learning inference for intelligent IoT devices</article-title>. <source>IEEE Network</source> <volume>36</volume>, <fpage>59</fpage>&#x02013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.1109/MNET.011.2000639</pub-id></citation>
</ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izhikevich</surname> <given-names>E. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Simple model of spiking neurons</article-title>. <source>IEEE Trans Neural Netw.</source> <volume>14</volume>, <fpage>1569</fpage>&#x02013;<lpage>1572</lpage>. <pub-id pub-id-type="doi">10.1109/TNN.2003.820440</pub-id><pub-id pub-id-type="pmid">18244602</pub-id></citation></ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izhikevich</surname> <given-names>E. M.</given-names></name></person-group> (<year>2004</year>). <article-title>Which model to use for cortical spiking neurons?</article-title> <source>IEEE Trans. Neural Netw.</source> <volume>15</volume>, <fpage>1063</fpage>&#x02013;<lpage>1070</lpage>. <pub-id pub-id-type="doi">10.1109/TNN.2004.832719</pub-id><pub-id pub-id-type="pmid">15484883</pub-id></citation></ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jordan</surname> <given-names>J.</given-names></name> <name><surname>Petrovici</surname> <given-names>M. A.</given-names></name> <name><surname>Breitwieser</surname> <given-names>O.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Deterministic networks for probabilistic computing</article-title>. <source>Sci. Rep.</source> <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-019-54137-7</pub-id><pub-id pub-id-type="pmid">31797943</pub-id></citation></ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kasabov</surname> <given-names>N.</given-names></name> <name><surname>Scott</surname> <given-names>N. M.</given-names></name> <name><surname>Tu</surname> <given-names>E.</given-names></name> <name><surname>Marks</surname> <given-names>S.</given-names></name> <name><surname>Sengupta</surname> <given-names>N.</given-names></name> <name><surname>Capecci</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Evolving spatio-temporal data machines based on the NeuCube neuromorphic framework: design methodology and selected applications</article-title>. <source>Neural Netw.</source> <volume>78</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2015.09.011</pub-id><pub-id pub-id-type="pmid">26576468</pub-id></citation></ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kimura</surname> <given-names>M.</given-names></name> <name><surname>Shibayama</surname> <given-names>Y.</given-names></name> <name><surname>Nakashima</surname> <given-names>Y.</given-names></name></person-group> (<year>2022</year>). <article-title>Neuromorphic chip integrated with a large-scale integration circuit and amorphous-metal-oxide semiconductor thin-film synapse devices</article-title>. <source>Sci. Rep.</source> <volume>12</volume>, <fpage>1</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-022-09443-y</pub-id><pub-id pub-id-type="pmid">35354900</pub-id></citation></ref>
<ref id="B73">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kiyoyama</surname> <given-names>K.</given-names></name> <name><surname>Horio</surname> <given-names>Y.</given-names></name> <name><surname>Fukushima</surname> <given-names>T.</given-names></name> <name><surname>Hashimoto</surname> <given-names>H.</given-names></name> <name><surname>Orima</surname> <given-names>T.</given-names></name> <name><surname>Koyanagi</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Design for 3-D stacked neural network circuit with cyclic analog computing</article-title>, in <source>2021 IEEE International 3D Systems Integration Conference (3DIC)</source> (<publisher-name>IEEE</publisher-name>), p. <fpage>1</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1109/3DIC52383.2021.9687608</pub-id></citation></ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kotchetkov</surname> <given-names>I. S.</given-names></name> <name><surname>Hwang</surname> <given-names>B. Y.</given-names></name> <name><surname>Appelboom</surname> <given-names>G.</given-names></name> <name><surname>Kellner</surname> <given-names>C. P.</given-names></name> <name><surname>Connolly</surname> <given-names>E. S.</given-names></name></person-group> (<year>2010</year>). <article-title>Brain-computer interfaces: military, neurosurgical, and ethical perspective</article-title>. <source>Neurosurg. Focus</source> <volume>28</volume>, <fpage>E25</fpage>. <pub-id pub-id-type="doi">10.3171/2010.2.FOCUS1027</pub-id><pub-id pub-id-type="pmid">20568942</pub-id></citation></ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koutrouvelis</surname> <given-names>I. A.</given-names></name> <name><surname>Canavos</surname> <given-names>G. C.</given-names></name></person-group> (<year>1999</year>). <article-title>Estimation in the Pearson type 3 distribution</article-title>. <source>Water Resour. Res.</source> <volume>35</volume>, <fpage>2693</fpage>&#x02013;<lpage>2704</lpage>. <pub-id pub-id-type="doi">10.1029/1999WR900174</pub-id></citation>
</ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lapicque</surname> <given-names>L.</given-names></name></person-group> (<year>1907</year>). <article-title>Recherches quantitatives sur l'excitation &#x000E9;lectrique des nerfs trait&#x000E9;e comme une polarisation</article-title>. <source>J. Physiol. Pathol. G&#x000E9;n.</source> <volume>9</volume>, <fpage>620</fpage>&#x02013;<lpage>635</lpage>.</citation>
</ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leutgeb</surname> <given-names>S.</given-names></name> <name><surname>Leutgeb</surname> <given-names>J. K.</given-names></name> <name><surname>Moser</surname> <given-names>M. B.</given-names></name> <name><surname>Moser</surname> <given-names>E. I.</given-names></name></person-group> (<year>2005</year>). <article-title>Place cells, spatial maps and the population code for memory</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>15</volume>, <fpage>738</fpage>&#x02013;<lpage>746</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2005.10.002</pub-id><pub-id pub-id-type="pmid">16263261</pub-id></citation></ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>S.</given-names></name> <name><surname>Yin</surname> <given-names>H.</given-names></name> <name><surname>Fang</surname> <given-names>X.</given-names></name> <name><surname>Lu</surname> <given-names>H.</given-names></name></person-group> (<year>2017</year>). <article-title>Lossless image compression algorithm and hardware architecture for bandwidth reduction of external memory</article-title>. <source>IET Image Process.</source> <volume>11</volume>, <fpage>376</fpage>&#x02013;<lpage>388</lpage>. <pub-id pub-id-type="doi">10.1049/iet-ipr.2016.0636</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>F.</given-names></name> <name><surname>Zhao</surname> <given-names>W.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Yang</surname> <given-names>T.</given-names></name> <name><surname>Jiang</surname> <given-names>L. S.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>SSTDP: supervised spike timing dependent plasticity for efficient spiking neural network training</article-title>. <source>Front. Neurosci.</source> <volume>15</volume>, <fpage>756876</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2021.756876</pub-id><pub-id pub-id-type="pmid">34803591</pub-id></citation></ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>X.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Zou</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>Q.</given-names></name> <name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Collaborative edge computing with FPGA-based CNN accelerators for energy-efficient and time-aware face tracking system</article-title>. <source>IEEE Trans. Comput. Social Syst.</source> <volume>9</volume>, <fpage>252</fpage>&#x02013;<lpage>266</lpage>. <pub-id pub-id-type="doi">10.1109/TCSS.2021.3059318</pub-id></citation>
</ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>D.</given-names></name> <name><surname>Shen</surname> <given-names>J.</given-names></name> <name><surname>Gu</surname> <given-names>Z.</given-names></name> <name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Zhu</surname> <given-names>X.</given-names></name> <name><surname>Xu</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Darwin: a neuromorphic hardware co-processor based on spiking neural networks</article-title>. <source>J. Syst. Arch.</source> <volume>77</volume>, <fpage>43</fpage>&#x02013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1016/j.sysarc.2017.01.003</pub-id></citation>
</ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maass</surname> <given-names>W.</given-names></name> <name><surname>Natschl&#x000E4;ger</surname> <given-names>T.</given-names></name> <name><surname>Markram</surname> <given-names>H.</given-names></name></person-group> (<year>2002</year>). <article-title>Real-time computing without stable states: a new framework for neural computation based on perturbations</article-title>. <source>Neural Comput.</source> <volume>14</volume>, <fpage>2531</fpage>&#x02013;<lpage>2560</lpage>. <pub-id pub-id-type="doi">10.1162/089976602760407955</pub-id><pub-id pub-id-type="pmid">12433288</pub-id></citation></ref>
<ref id="B83">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mahowald</surname> <given-names>M.</given-names></name></person-group> (<year>1994</year>). <source>An Analog VLSI System for Stereoscopic Vision</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer Science and Business Media</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-1-4615-2724-4</pub-id></citation></ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mao</surname> <given-names>R.</given-names></name> <name><surname>Li</surname> <given-names>S.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Xia</surname> <given-names>Z.</given-names></name> <name><surname>Xiao</surname> <given-names>J.</given-names></name> <name><surname>Zhu</surname> <given-names>Z.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>An ultra-energy-efficient and high accuracy ECG classification processor with SNN inference assisted by on-chip ANN learning</article-title>. <source>IEEE Trans. Biomed. Circuits Syst</source>. <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1109/TBCAS.2022.3185720</pub-id><pub-id pub-id-type="pmid">35737625</pub-id></citation></ref>
<ref id="B85">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Martin</surname> <given-names>A. J.</given-names></name></person-group> (<year>1989</year>). <source>Programming in VLSI: From Communicating Processes to Delay-Insensitive Circuits</source>. <publisher-loc>Pasadena, CA</publisher-loc>: <publisher-name>California Institute of Technology, Department of Computer Science</publisher-name>.</citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martin</surname> <given-names>A. J.</given-names></name> <name><surname>Nystr&#x000F6;m</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Asynchronous techniques for system-on-chip design</article-title>. <source>Proc. IEEE</source> <volume>94</volume>, <fpage>1089</fpage>&#x02013;<lpage>1120</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2006.875789</pub-id></citation></ref>
<ref id="B87">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCarthy</surname> <given-names>J.</given-names></name> <name><surname>Minsky</surname> <given-names>M. L.</given-names></name> <name><surname>Rochester</surname> <given-names>N.</given-names></name> <name><surname>Shannon</surname> <given-names>C. E.</given-names></name></person-group> (<year>2006</year>). <article-title>A proposal for the Dartmouth summer research project on artificial intelligence</article-title>. <source>AI Magazine</source> <volume>27</volume>, <fpage>12</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1609/aimag.v27i4.1904</pub-id></citation>
</ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McKennoch</surname> <given-names>S.</given-names></name> <name><surname>Voegtlin</surname> <given-names>T.</given-names></name> <name><surname>Bushnell</surname> <given-names>L.</given-names></name></person-group> (<year>2009</year>). <article-title>Spike-timing error backpropagation in theta neuron networks</article-title>. <source>Neural Comput.</source> <volume>21</volume>, <fpage>9</fpage>&#x02013;<lpage>45</lpage>. <pub-id pub-id-type="doi">10.1162/neco.2009.09-07-610</pub-id><pub-id pub-id-type="pmid">19431278</pub-id></citation></ref>
<ref id="B89">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mead</surname> <given-names>C. A.</given-names></name></person-group> (<year>1989</year>). <source>Analog VLSI and Neural Systems</source>. <publisher-loc>Reading, MA</publisher-loc>: <publisher-name>Addison-Wesley</publisher-name>.</citation>
</ref>
<ref id="B90">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Merolla</surname> <given-names>P.</given-names></name> <name><surname>Boahen</surname> <given-names>K. A.</given-names></name></person-group> (<year>2003</year>). <article-title>A recurrent model of orientation maps with simple and complex cells</article-title>, in <source>Advances in Neural Information Processing Systems</source>.</citation>
</ref>
<ref id="B91">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Migliore</surname> <given-names>R.</given-names></name> <name><surname>Lupascu</surname> <given-names>C. A.</given-names></name> <name><surname>Bologna</surname> <given-names>L. L.</given-names></name> <name><surname>Romani</surname> <given-names>A.</given-names></name> <name><surname>Courcol</surname> <given-names>J-. D.</given-names></name> <name><surname>Antonel</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>The physiological variability of channel density in hippocampal CA1 pyramidal cells and interneurons explored using a unified data-driven modeling workflow</article-title>. <source>PLoS Comput. Biol.</source> <volume>14</volume>, <fpage>e1006423</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1006423</pub-id><pub-id pub-id-type="pmid">30222740</pub-id></citation></ref>
<ref id="B92">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miko&#x00142;ajewska</surname> <given-names>E.</given-names></name> <name><surname>Miko&#x00142;ajewski</surname> <given-names>D.</given-names></name></person-group> (<year>2014</year>). <article-title>Non-invasive EEG- based brain-computer interfaces in patients with disorders of consciousness</article-title>. <source>Military Med. Res.</source> <volume>1</volume>, <fpage>1</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1186/2054-9369-1-14</pub-id><pub-id pub-id-type="pmid">26056608</pub-id></citation></ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mohemmed</surname> <given-names>A.</given-names></name> <name><surname>Schliebs</surname> <given-names>S.</given-names></name> <name><surname>Matsuda</surname> <given-names>S.</given-names></name> <name><surname>Kasabov</surname> <given-names>N.</given-names></name></person-group> (<year>2013</year>). <article-title>Training spiking neural networks to associate spatio-temporal input&#x02013;output spike patterns</article-title>. <source>Neurocomputing</source> <volume>107</volume>, <fpage>3</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2012.08.034</pub-id></citation>
</ref>
<ref id="B94">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mudgal</surname> <given-names>S. K.</given-names></name> <name><surname>Sharma</surname> <given-names>S. K.</given-names></name> <name><surname>Chaturvedi</surname> <given-names>J.</given-names></name> <name><surname>Sharma</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Brain computer interface advancement in neurosciences: applications and issues</article-title>. <source>Interdisciplin. Neurosurg. Adv. Tech. Case Manage.</source> <volume>20</volume>, <fpage>100694</fpage>. <pub-id pub-id-type="doi">10.1016/j.inat.2020.100694</pub-id><pub-id pub-id-type="pmid">34386243</pub-id></citation></ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nejad</surname> <given-names>M. B.</given-names></name></person-group> (<year>2020</year>). <article-title>Parametric evaluation of routing algorithms in network on chip architecture</article-title>. <source>Comput. Syst. Sci. Eng.</source> <volume>35</volume>, <fpage>367</fpage>&#x02013;<lpage>375</lpage>. <pub-id pub-id-type="doi">10.32604/csse.2020.35.367</pub-id></citation>
</ref>
<ref id="B96">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>von Neumann</surname> <given-names>J.</given-names></name></person-group> (<year>1958</year>). <article-title>The computer and the brain</article-title>. <source>Ann. Hist. Comput.</source> <volume>11</volume>, <fpage>161</fpage>&#x02013;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1109/MAHC.1989.10032</pub-id></citation>
</ref>
<ref id="B97">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Olaronke</surname> <given-names>I.</given-names></name> <name><surname>Rhoda</surname> <given-names>I.</given-names></name> <name><surname>Gambo</surname> <given-names>I.</given-names></name> <name><surname>Oluwaseun</surname> <given-names>O.</given-names></name> <name><surname>Janet</surname> <given-names>O.</given-names></name></person-group> (<year>2018</year>). <article-title>Prospects and problems of brain computer interface in healthcare</article-title>. <source>Current J. Appl. Sci. Technol.</source> <volume>23</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.9734/CJAST/2018/44358</pub-id></citation>
</ref>
<ref id="B98">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Panzeri</surname> <given-names>S.</given-names></name> <name><surname>Brunel</surname> <given-names>N.</given-names></name> <name><surname>Logothetis</surname> <given-names>N. K.</given-names></name> <name><surname>Kayser</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <article-title>Sensory neural codes using multiplexed temporal scales</article-title>. <source>Trends Neurosci.</source> <volume>33</volume>, <fpage>111</fpage>&#x02013;<lpage>120</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2009.12.001</pub-id><pub-id pub-id-type="pmid">20045201</pub-id></citation></ref>
<ref id="B99">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paton</surname> <given-names>L. W.</given-names></name> <name><surname>Tiffin</surname> <given-names>P. A.</given-names></name></person-group> (<year>2022</year>). <article-title>Technology matters: machine learning approaches to personalised child and adolescent mental health care</article-title>. <source>Child Adolescent Mental Health</source> <volume>27</volume>, <fpage>307</fpage>&#x02013;<lpage>308</lpage>. <pub-id pub-id-type="doi">10.1111/camh.12546</pub-id><pub-id pub-id-type="pmid">35218142</pub-id></citation></ref>
<ref id="B100">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pei</surname> <given-names>J.</given-names></name> <name><surname>Deng</surname> <given-names>L.</given-names></name> <name><surname>Song</surname> <given-names>S.</given-names></name> <name><surname>Zhao</surname> <given-names>M.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Wu</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Towards artificial general intelligence with hybrid Tianjic chip architecture</article-title>. <source>Nature</source> <volume>572</volume>, <fpage>106</fpage>&#x02013;<lpage>111</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-019-1424-8</pub-id><pub-id pub-id-type="pmid">31367028</pub-id></citation></ref>
<ref id="B101">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pfeil</surname> <given-names>T.</given-names></name> <name><surname>Jordan</surname> <given-names>J.</given-names></name> <name><surname>Tetzlaff</surname> <given-names>T.</given-names></name> <name><surname>Gr&#x000FC;bl</surname> <given-names>A.</given-names></name> <name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Diesmann</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study</article-title>, in <source>11th G&#x000F6;ttingen Meeting of the German Neuroscience Society. Computational and Systems Neuroscience</source> (<publisher-loc>FZJ-2015-05833</publisher-loc>). <pub-id pub-id-type="doi">10.1103/PhysRevX.6.021023</pub-id></citation>
</ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prezioso</surname> <given-names>M.</given-names></name> <name><surname>Merrikh-Bayat</surname> <given-names>F.</given-names></name> <name><surname>Hoskins</surname> <given-names>B. D.</given-names></name> <name><surname>Adam</surname> <given-names>G. C.</given-names></name> <name><surname>Likharev</surname> <given-names>K. K.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B</given-names></name></person-group> (<year>2015</year>). <article-title>Training and operation of an integrated neuromorphic network based on metal-oxide memristors</article-title>. <source>Nature</source> <volume>521</volume>, <fpage>61</fpage>&#x02013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1038/nature14441</pub-id><pub-id pub-id-type="pmid">25951284</pub-id></citation></ref>
<ref id="B103">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramakuri</surname> <given-names>S. K.</given-names></name> <name><surname>Chithaluru</surname> <given-names>P.</given-names></name> <name><surname>Kumar</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Eyeblink robot control using brain-computer interface for healthcare applications</article-title>. <source>Int. J. Mobile Dev. Wearable Technol. Flexible Electron.</source> <volume>10</volume>, <fpage>38</fpage>&#x02013;<lpage>50</lpage>. <pub-id pub-id-type="doi">10.4018/IJMDWTFE.2019070103</pub-id></citation>
</ref>
<ref id="B104">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rizza</surname> <given-names>M. F.</given-names></name> <name><surname>Locatelli</surname> <given-names>F.</given-names></name> <name><surname>Masoli</surname> <given-names>S.</given-names></name> <name><surname>S&#x000E1;nchez-Ponce</surname> <given-names>D.</given-names></name> <name><surname>Mu&#x000F1;oz</surname> <given-names>A.</given-names></name> <name><surname>Prestori</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Stellate cell computational modeling predicts signal filtering in the molecular layer circuit of cerebellum</article-title>. <source>Sci. Rep.</source> <volume>11</volume>, <fpage>3873</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-021-83209-w</pub-id><pub-id pub-id-type="pmid">33594118</pub-id></citation></ref>
<ref id="B105">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Russo</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <article-title>Quantale modules and their operators, with applications</article-title>. <source>J. Logic Comput.</source> <volume>20</volume>, <fpage>917</fpage>&#x02013;<lpage>946</lpage>. <pub-id pub-id-type="doi">10.1093/logcom/exn088</pub-id></citation>
</ref>
<ref id="B106">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Safiullina</surname> <given-names>A. N.</given-names></name></person-group> (<year>2016</year>). <article-title>Estimation of the binomial distribution parameters using the method of moments and its asymptotic properties</article-title>. <source>U&#x0010D;&#x000EB;nye Zapiski Kazanskogo Universiteta: Seri&#x000E2; Fiziko-Matemati&#x0010D;eskie Nauki</source> <volume>158</volume>, <fpage>221</fpage>&#x02013;<lpage>230</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.mathnet.ru/php/archive.phtml?wshow=paper&#x00026;jrnid=uzku&#x00026;paperid=1364&#x00026;option_lang=eng">http://www.mathnet.ru/php/archive.phtml?wshow=paper&#x00026;jrnid=uzku&#x00026;paperid=1364&#x00026;option_lang=eng</ext-link></citation>
</ref>
<ref id="B107">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Samonds</surname> <given-names>J. M.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name> <name><surname>Bernard</surname> <given-names>M. R.</given-names></name> <name><surname>Bonds</surname> <given-names>A. B.</given-names></name></person-group> (<year>2006</year>). <article-title>Synchronous activity in cat visual cortex encodes collinear and cocircular contours</article-title>. <source>J. Neurophysiol.</source> <volume>95</volume>, <fpage>2602</fpage>&#x02013;<lpage>2616</lpage>. <pub-id pub-id-type="doi">10.1152/jn.01070.2005</pub-id><pub-id pub-id-type="pmid">16354730</pub-id></citation></ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Billaudelle</surname> <given-names>S.</given-names></name> <name><surname>Dauer</surname> <given-names>P.</given-names></name> <name><surname>Weis</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Accelerated analog neuromorphic computing</article-title>. <source>arXiv preprint</source> arXiv:2003.11996. <pub-id pub-id-type="doi">10.1007/978-3-030-91741-8_6</pub-id></citation>
</ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Gr&#x000FC;bl</surname> <given-names>A.</given-names></name> <name><surname>Hock</surname> <given-names>M.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name> <name><surname>Millner</surname> <given-names>S. A.</given-names></name> <etal/></person-group>. (<year>2010</year>). <article-title>Wafer-scale neuromorphic hardware system for large-scale neural modeling</article-title>, in <source>2010 IEEE International Symposium on Circuits, and Systems ISCAS (IEEE</source>), <fpage>1947</fpage>&#x02013;<lpage>1950</lpage>. <pub-id pub-id-type="doi">10.1109/ISCAS.2010.5536970</pub-id></citation>
</ref>
<ref id="B110">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schemmel</surname> <given-names>J.</given-names></name> <name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Meier</surname> <given-names>K.</given-names></name> <name><surname>Ostendorf</surname> <given-names>B.</given-names></name></person-group> (<year>2007</year>). <article-title>Modeling synaptic plasticity within networks of highly accelerated I&#x00026;F neurons</article-title>, in <source>2007 IEEE International Symposium on Circuits and Systems (IEEE)</source>, p. <fpage>3367</fpage>&#x02013;<lpage>3370</lpage>. <pub-id pub-id-type="doi">10.1109/ISCAS.2007.378289</pub-id></citation>
</ref>
<ref id="B111">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scholze</surname> <given-names>S.</given-names></name> <name><surname>Eisenreich</surname> <given-names>H.</given-names></name> <name><surname>H&#x000F6;ppner</surname> <given-names>S.</given-names></name> <name><surname>Ellguth</surname> <given-names>G.</given-names></name> <name><surname>Henker</surname> <given-names>S.</given-names></name> <name><surname>Ander</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>A 32 GBit/s communication SoC for a waferscale neuromorphic system</article-title>. <source>Integration</source> <volume>45</volume>, <fpage>61</fpage>&#x02013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1016/j.vlsi.2011.05.003</pub-id></citation>
</ref>
<ref id="B112">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sengupta</surname> <given-names>P.</given-names></name> <name><surname>Stalin John</surname> <given-names>M. R.</given-names></name> <name><surname>Sridhar</surname> <given-names>S. S.</given-names></name></person-group> (<year>2020</year>). <article-title>Classification of conscious, semi-conscious and minimally conscious state for medical assisting system using brain computer interface and deep neural network</article-title>. <source>J. Med. Robot. Res.</source> <pub-id pub-id-type="doi">10.1142/S2424905X19420042</pub-id></citation>
</ref>
<ref id="B113">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Service</surname> <given-names>R. F.</given-names></name></person-group> (<year>2014</year>). <article-title>The brain chip</article-title>. <source>Science</source> <volume>345</volume>, <fpage>614</fpage>. <pub-id pub-id-type="doi">10.1126/science.345.6197.614</pub-id><pub-id pub-id-type="pmid">25104367</pub-id></citation></ref>
<ref id="B114">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shanker</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>The discrete Poisson-Akash distribution</article-title>. <source>Int. J. Probab. Stat.</source> <volume>6</volume>, <fpage>1</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.5336/biostatic.2017-54834</pub-id></citation>
</ref>
<ref id="B115">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shi</surname> <given-names>Y.</given-names></name> <name><surname>Mizumoto</surname> <given-names>M.</given-names></name> <name><surname>Yubazaki</surname> <given-names>N.</given-names></name> <name><surname>Otani</surname> <given-names>M.</given-names></name></person-group> (<year>1996</year>). <article-title>A learning algorithm for tuning fuzzy rules based on the gradient descent method</article-title>, in <source>Proceedings of IEEE 5th International Fuzzy Systems</source>, Vol. 1 (IEEE), p. <fpage>55</fpage>&#x02013;<lpage>61</lpage>.</citation>
</ref>
<ref id="B116">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shiotani</surname> <given-names>M.</given-names></name> <name><surname>Nagano</surname> <given-names>N.</given-names></name> <name><surname>Ishikawa</surname> <given-names>A.</given-names></name> <name><surname>Sakai</surname> <given-names>K.</given-names></name> <name><surname>Yoshinaga</surname> <given-names>T.</given-names></name> <name><surname>Kato</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Challenges in detection of premonitory electroencephalographic (EEG) changes of drug-induced seizure using a non-human primate EEG telemetry model</article-title>. <source>J. Pharmacol. Toxicol. Methods</source> <volume>81</volume>, <fpage>337</fpage>. <pub-id pub-id-type="doi">10.1016/j.vascn.2016.02.010</pub-id></citation>
</ref>
<ref id="B117">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sivilotti</surname> <given-names>M. A.</given-names></name></person-group> (<year>1991</year>). <source>Wiring Considerations in Analog VLSI Systems, with Application to Field-Programmable Networks</source>. <publisher-loc>California</publisher-loc>: <publisher-name>California Institute of Technology</publisher-name>.</citation>
</ref>
<ref id="B118">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sivilotti</surname> <given-names>M. A.</given-names></name> <name><surname>Emerling</surname> <given-names>M.</given-names></name> <name><surname>Mead</surname> <given-names>C.</given-names></name></person-group> (<year>1990</year>). <source>A Novel Associative Memory Implemented Using Collective Computation</source>.</citation>
</ref>
<ref id="B119">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sporea</surname> <given-names>I.</given-names></name> <name><surname>Gr&#x000FC;ning</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Supervised learning in multilayer spiking neural networks</article-title>. <source>Neural Comput.</source> <volume>25</volume>, <fpage>473</fpage>&#x02013;<lpage>509</lpage>. <pub-id pub-id-type="doi">10.1162/NECO_a_00396</pub-id><pub-id pub-id-type="pmid">23148411</pub-id></citation></ref>
<ref id="B120">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stankovic</surname> <given-names>V. V.</given-names></name> <name><surname>Milenkovic</surname> <given-names>N. Z.</given-names></name></person-group> (<year>2015</year>). <article-title>Synchronization algorithm for predictors for SDRAM memories</article-title>. <source>J. Supercomput.</source> <volume>71</volume>, <fpage>3609</fpage>&#x02013;<lpage>3636</lpage>. <pub-id pub-id-type="doi">10.1007/s11227-015-1452-6</pub-id></citation>
</ref>
<ref id="B121">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Syed</surname> <given-names>T.</given-names></name> <name><surname>Kakani</surname> <given-names>V.</given-names></name> <name><surname>Cui</surname> <given-names>X.</given-names></name> <name><surname>Kim</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>Exploring optimized spiking neural network architectures for classification tasks on embedded platforms</article-title>. <source>Sensors</source> <volume>21</volume>, <fpage>3240</fpage>. <pub-id pub-id-type="doi">10.3390/s21093240</pub-id><pub-id pub-id-type="pmid">34067080</pub-id></citation></ref>
<ref id="B122">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsodyks</surname> <given-names>M.</given-names></name> <name><surname>Pawelzik</surname> <given-names>K.</given-names></name> <name><surname>Markram</surname> <given-names>H.</given-names></name></person-group> (<year>1998</year>). <article-title>Neural networks with dynamic synapses</article-title>. <source>Neural Comput.</source> <volume>10</volume>, <fpage>821</fpage>&#x02013;<lpage>835</lpage>. <pub-id pub-id-type="doi">10.1162/089976698300017502</pub-id><pub-id pub-id-type="pmid">9573407</pub-id></citation></ref>
<ref id="B123">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsodyks</surname> <given-names>M. V.</given-names></name> <name><surname>Markram</surname> <given-names>H.</given-names></name></person-group> (<year>1997</year>). <article-title>The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability</article-title>. <source>Proc. Natl. Acad. Sci.</source> <volume>94</volume>, <fpage>719</fpage>&#x02013;<lpage>723</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.94.2.719</pub-id><pub-id pub-id-type="pmid">9012851</pub-id></citation></ref>
<ref id="B124">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Valadez-God&#x000ED;nez</surname> <given-names>S.</given-names></name> <name><surname>Sossa</surname> <given-names>H.</given-names></name> <name><surname>Santiago-Montero</surname> <given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>On the accuracy and computational cost of spiking neuron implementation</article-title>. <source>Neural Netw.</source> <volume>122</volume>, <fpage>196</fpage>&#x02013;<lpage>217</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2019.09.026</pub-id><pub-id pub-id-type="pmid">31689679</pub-id></citation></ref>
<ref id="B125">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Rossum</surname> <given-names>M. C. W.</given-names></name></person-group> (<year>2001</year>). <article-title>A novel spike distance</article-title>. <source>Neural Comput.</source> <volume>13</volume>, <fpage>751</fpage>&#x02013;<lpage>763</lpage>. <pub-id pub-id-type="doi">10.1162/089976601300014321</pub-id><pub-id pub-id-type="pmid">11255567</pub-id></citation></ref>
<ref id="B126">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Victor</surname> <given-names>J. D.</given-names></name> <name><surname>Purpura</surname> <given-names>K. P.</given-names></name></person-group> (<year>1997</year>). <article-title>Metric-space analysis of spike trains: theory, algorithms and application</article-title>. <source>Network Comput. Neural Syst.</source> <volume>8</volume>, <fpage>127</fpage>&#x02013;<lpage>164</lpage>. <pub-id pub-id-type="doi">10.1088/0954-898X_8_2_003</pub-id></citation>
</ref>
<ref id="B127">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Voutsas</surname> <given-names>K.</given-names></name> <name><surname>Langner</surname> <given-names>G.</given-names></name> <name><surname>Adamy</surname> <given-names>J.</given-names></name> <name><surname>Ochse</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>A brain-like neural network for periodicity analysis</article-title>. <source>IEEE Trans. Syst. Man Cybern. Part B</source> <volume>35</volume>, <fpage>12</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1109/TSMCB.2004.837751</pub-id><pub-id pub-id-type="pmid">15719929</pub-id></citation></ref>
<ref id="B128">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Q. N.</given-names></name> <name><surname>Zhao</surname> <given-names>C.</given-names></name> <name><surname>Liu</surname> <given-names>W.</given-names></name> <name><surname>Van Zalinge</surname> <given-names>H.</given-names></name> <name><surname>Liu</surname> <given-names>Y. N.</given-names></name> <name><surname>Yang</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>All-solid-state ion doping synaptic transistor for bionic neural computing</article-title>, in <source>2021 International Conference on IC Design and Technology (ICICDT) (IEEE)</source>, p. <fpage>1</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1109/ICICDT51558.2021.9626468</pub-id></citation></ref>
<ref id="B129">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Y. C.</given-names></name> <name><surname>Hu</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). <article-title>New development of artificial cognitive computation: TrueNorth neuron chip</article-title>. <source>Comput. Sci</source>. <volume>43</volume>, <fpage>17</fpage>&#x02013;<lpage>20</lpage>&#x0002B;24. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.cnki.com.cn/Article/CJFDTotal-JSJA2016S1004.htm">https://www.cnki.com.cn/Article/CJFDTotal-JSJA2016S1004.htm</ext-link></citation>
</ref>
<ref id="B130">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>Y.</given-names></name> <name><surname>Zeng</surname> <given-names>X.</given-names></name> <name><surname>Zhong</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>A new supervised learning algorithm for spiking neurons</article-title>. <source>Neural Comput.</source> <volume>25</volume>, <fpage>1472</fpage>&#x02013;<lpage>1511</lpage>. <pub-id pub-id-type="doi">10.1162/NECO_a_00450</pub-id><pub-id pub-id-type="pmid">23517101</pub-id></citation></ref>
<ref id="B131">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>H.</given-names></name> <name><surname>Yuan</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Zhao</surname> <given-names>G.</given-names></name> <name><surname>Sun</surname> <given-names>Z.</given-names></name> <name><surname>Yao</surname> <given-names>Q.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>BrainIoT: brain-like productive services provisioning with federated learning in industrial IoT</article-title>. <source>IEEE Internet Things J.</source> <volume>9</volume>, <fpage>2014</fpage>&#x02013;<lpage>2024</lpage>. <pub-id pub-id-type="doi">10.1109/JIOT.2021.3089334</pub-id></citation>
</ref>
<ref id="B132">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>R.</given-names></name> <name><surname>Guan</surname> <given-names>X.</given-names></name> <name><surname>Hassan</surname> <given-names>M. M.</given-names></name> <name><surname>Almogren</surname> <given-names>A.</given-names></name> <name><surname>Alsanad</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>AI-enabled emotion-aware robot: the fusion of smart clothing, edge clouds and robotics</article-title>. <source>Future Gener. Comput. Syst.</source> <volume>102</volume>, <fpage>701</fpage>&#x02013;<lpage>709</lpage>. <pub-id pub-id-type="doi">10.1016/j.future.2019.09.029</pub-id></citation>
</ref>
<ref id="B133">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J. J.</given-names></name> <name><surname>Strukov</surname> <given-names>D. B.</given-names></name> <name><surname>Stewart</surname> <given-names>D. R.</given-names></name></person-group> (<year>2013</year>). <article-title>Memristive devices for computing</article-title>. <source>Nat. Nanotechnol.</source> <volume>8</volume>, <fpage>13</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1038/nnano.2012.240</pub-id><pub-id pub-id-type="pmid">23269430</pub-id></citation></ref>
<ref id="B134">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>Z.</given-names></name> <name><surname>Huang</surname> <given-names>Y.</given-names></name> <name><surname>Zhu</surname> <given-names>J.</given-names></name> <name><surname>Ye</surname> <given-names>T. T.</given-names></name></person-group> (<year>2020</year>). <article-title>Analog circuit implementation of LIF and STDP models for spiking neural networks</article-title>, in <source>Proceedings of the 2020 on Great Lakes Symposium on VLSI</source>, p. <fpage>469</fpage>&#x02013;<lpage>474</lpage>. <pub-id pub-id-type="doi">10.1145/3386263.3406940</pub-id></citation>
</ref>
<ref id="B135">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yasunaga</surname> <given-names>M.</given-names></name> <name><surname>Masuda</surname> <given-names>N.</given-names></name> <name><surname>Yagyu</surname> <given-names>M.</given-names></name> <name><surname>Asai</surname> <given-names>M.</given-names></name> <name><surname>Yamada</surname> <given-names>M.</given-names></name> <name><surname>Masaki</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>1990</year>). <article-title>Design, fabrication and evaluation of a 5-inch wafer scale neural network LSI composed of 576 digital neurons</article-title>, in <source>1990 IJCNN International Joint Conference on Neural Networks</source> (<publisher-loc>IEEE</publisher-loc>), p. <fpage>527</fpage>&#x02013;<lpage>535</lpage>. <pub-id pub-id-type="doi">10.1109/IJCNN.1990.137618</pub-id></citation>
</ref>
<ref id="B136">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>von Neumann</surname> <given-names>J.</given-names></name></person-group> (<year>1958</year>). <source>The Computer and the Brain</source>. <publisher-loc>New Haven</publisher-loc>: <publisher-name>Yale University Press</publisher-name>.</citation>
</ref>
<ref id="B137">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>Q.</given-names></name> <name><surname>Tang</surname> <given-names>H.</given-names></name> <name><surname>Tan</surname> <given-names>K. C.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>Rapid feedforward computation by temporal encoding and learning with spiking neurons</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst.</source> <volume>24</volume>, <fpage>1539</fpage>&#x02013;<lpage>1552</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2013.2245677</pub-id><pub-id pub-id-type="pmid">24808592</pub-id></citation></ref>
<ref id="B138">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qi</surname> <given-names>Y.</given-names></name> <name><surname>Pan</surname> <given-names>H.</given-names></name> <name><surname>He</surname> <given-names>S.</given-names></name> <name><surname>Li</surname> <given-names>L.</given-names></name> <name><surname>Li</surname> <given-names>W.</given-names></name> <name><surname>Han</surname> <given-names>F.</given-names></name></person-group> (<year>2014</year>). <article-title>Parallelization of NCS algorithm based on heterogeneous multi-core prototype chip</article-title>. <source>Microelectron. Comput.</source> <volume>31</volume>, <fpage>87</fpage>&#x02013;<lpage>91</lpage>.</citation>
</ref>
<ref id="B139">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Shi</surname> <given-names>Q.</given-names></name> <name><surname>Wang</surname> <given-names>R.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Spectrum-dependent photonic synapses based on 2D imine polymers for power-efficient neuromorphic computing</article-title>. <source>InfoMat</source> <volume>3</volume>, <fpage>904</fpage>&#x02013;<lpage>916</lpage>. <pub-id pub-id-type="doi">10.1002/inf2.12198</pub-id></citation>
</ref>
<ref id="B140">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhenghao</surname> <given-names>Z. H.</given-names></name> <name><surname>Zhilei</surname> <given-names>C. H.</given-names></name> <name><surname>Hu</surname> <given-names>X.</given-names></name> <name><surname>Xu</surname> <given-names>C.</given-names></name></person-group> (<year>2022</year>). <article-title>Design and implementation of NEST brain-like simulator based on heterogeneous computing platform</article-title>. <source>Microelectron. Comput.</source> <volume>39</volume>, <fpage>54</fpage>&#x02013;<lpage>62</lpage>.</citation>
</ref>
<ref id="B141">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zou</surname> <given-names>Z.</given-names></name> <name><surname>Kim</surname> <given-names>Y.</given-names></name> <name><surname>Imani</surname> <given-names>F.</given-names></name> <name><surname>Alimohamadi</surname> <given-names>H.</given-names></name> <name><surname>Cammarota</surname> <given-names>R.</given-names></name> <name><surname>Imani</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Scalable edge-based hyperdimensional learning system with brain-like neural adaptation</article-title>, in <source>Proceedings of the International Conference for High Performance Computing Networking Storage and Analysis</source> (<publisher-loc>St. Louis, MO</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1145/3458817.3480958</pub-id></citation>
</ref>
</ref-list>
</back>
</article>