<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neuroinform.</journal-id>
<journal-title>Frontiers in Neuroinformatics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neuroinform.</abbrev-journal-title>
<issn pub-type="epub">1662-5196</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fninf.2022.1015624</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Technology and Code</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Brian2Loihi: An emulator for the neuromorphic chip Loihi using the spiking neural network simulator Brian</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Michaelis</surname> <given-names>Carlo</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/898828/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Lehr</surname> <given-names>Andrew B.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1084073/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Oed</surname> <given-names>Winfried</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1963429/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Tetzlaff</surname> <given-names>Christian</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computational Neuroscience, University of G&#x000F6;ttingen</institution>, <addr-line>G&#x000F6;ttingen</addr-line>, <country>Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>Bernstein Center for Computational Neuroscience, University of G&#x000F6;ttingen</institution>, <addr-line>G&#x000F6;ttingen</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Andrew P. Davison, UMR9197 Institut des Neurosciences Paris Saclay (Neuro-PSI), France</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Yao-Feng Chang, Intel, United States; Terrence C. Stewart, National Research Council Canada (NRC), Canada</p></fn>

<corresp id="c001">&#x0002A;Correspondence: Carlo Michaelis <email>carlo.michaelis&#x00040;phys.uni-goettingen.de</email></corresp>
<fn fn-type="equal" id="fn001"><p>&#x02020;These authors have contributed equally to this work</p></fn></author-notes>
<pub-date pub-type="epub">
<day>09</day>
<month>11</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>16</volume>
<elocation-id>1015624</elocation-id>
<history>
<date date-type="received">
<day>09</day>
<month>08</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>12</day>
<month>10</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Michaelis, Lehr, Oed and Tetzlaff.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Michaelis, Lehr, Oed and Tetzlaff</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<p>Developing intelligent neuromorphic solutions remains a challenging endeavor. It requires a solid conceptual understanding of the hardware&#x00027;s fundamental building blocks. Beyond this, accessible and user-friendly prototyping is crucial to speed up the design pipeline. We developed an open source Loihi emulator based on the neural network simulator Brian that can easily be incorporated into existing simulation workflows. We demonstrate errorless Loihi emulation in software for a single neuron and for a recurrently connected spiking neural network. On-chip learning is also reviewed and implemented, with reasonable discrepancy due to stochastic rounding. This work provides a coherent presentation of Loihi&#x00027;s computational unit and introduces a new, easy-to-use Loihi prototyping package with the aim to help streamline conceptualization and deployment of new algorithms.</p></abstract>
<kwd-group>
<kwd>neuromorphic computing</kwd>
<kwd>Loihi</kwd>
<kwd>Brian2</kwd>
<kwd>emulator</kwd>
<kwd>spiking neural network</kwd>
<kwd>open source</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="0"/>
<equation-count count="26"/>
<ref-count count="38"/>
<page-count count="13"/>
<word-count count="8215"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Neuromorphic computing offers exciting new computational structures. Decentralized units inspired by neurons are implemented in hardware (reviewed by Schuman et al., <xref ref-type="bibr" rid="B29">2017</xref>; Rajendran et al., <xref ref-type="bibr" rid="B25">2019</xref>; Young et al., <xref ref-type="bibr" rid="B38">2019</xref>). These can be connected up to one another, stimulated with inputs, and the resulting activity patterns can be read out from the chip as output. A variety of algorithms and applications have been developed in recent years, including robotic control (DeWolf et al., <xref ref-type="bibr" rid="B9">2016</xref>, <xref ref-type="bibr" rid="B8">2020</xref>; Michaelis et al., <xref ref-type="bibr" rid="B18">2020</xref>; Stagsted et al., <xref ref-type="bibr" rid="B33">2020</xref>), spiking variants of deep learning algorithms, attractor networks, nearest-neighbor or graph search algorithms (reviewed by Davies et al., <xref ref-type="bibr" rid="B5">2021</xref>). Moreover, neuromorphic hardware may provide a suitable substrate for performing large scale simulations of the brain (Furber, <xref ref-type="bibr" rid="B10">2016</xref>; Thakur et al., <xref ref-type="bibr" rid="B35">2018</xref>). Neuromorphic chips specialized for particular computational tasks can either be provided as a neuromorphic computing cluster or be integrated into existing systems, akin to graphics processing units (GPU) in modern computers (Furber et al., <xref ref-type="bibr" rid="B11">2014</xref>; Davies et al., <xref ref-type="bibr" rid="B5">2021</xref>). With the right ideas, networks of spiking units implemented in neuromorphic hardware can provide the basis for powerful and efficient computation. 
Nevertheless, the development of new algorithms for spiking neural networks, applicable to neuromorphic hardware, is a challenge (Gr&#x000FC;ning and Bohte, <xref ref-type="bibr" rid="B13">2014</xref>; Pfeiffer and Pfeil, <xref ref-type="bibr" rid="B23">2018</xref>; Bouvier et al., <xref ref-type="bibr" rid="B2">2019</xref>).</p>
<p>At this point, without much background knowledge of neuromorphic hardware, one can get started programming using the various software development kits available (e.g., Br&#x000FC;derle et al., <xref ref-type="bibr" rid="B3">2011</xref>; Sawada et al., <xref ref-type="bibr" rid="B28">2016</xref>; Lin et al., <xref ref-type="bibr" rid="B14">2018</xref>; Rhodes et al., <xref ref-type="bibr" rid="B26">2018</xref>; Michaelis, <xref ref-type="bibr" rid="B17">2020</xref>; M&#x000FC;ller et al., <xref ref-type="bibr" rid="B21">2020a</xref>,<xref ref-type="bibr" rid="B20">b</xref>; Spilger et al., <xref ref-type="bibr" rid="B31">2020</xref>; Rueckauer et al., <xref ref-type="bibr" rid="B27">2021</xref>). Emulators for neuromorphic hardware (Furber et al., <xref ref-type="bibr" rid="B11">2014</xref>; Petrovici et al., <xref ref-type="bibr" rid="B22">2014</xref>; Luo et al., <xref ref-type="bibr" rid="B16">2018</xref>; Valancius et al., <xref ref-type="bibr" rid="B36">2020</xref>) running on a standard computer or on field programmable gate arrays (FPGAs) make it possible to develop neuromorphic network architectures without even needing access to a neuromorphic chip (see e.g., NengoLoihi<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> and Dynap-SE<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref>). This can speed up prototyping, since initializing networks on neuromorphic chips, i.e., distributing neurons and synapses, as well as reading out the system&#x00027;s state variables, takes some time. At the same time, emulators transparently contain the main functionalities of the hardware in code and therefore provide insight into how it works. With this understanding, algorithms can be intelligently designed and complex network structures implemented.</p>
<p>In the following, we introduce an emulator for the digital neuromorphic chip <monospace>Loihi</monospace> (Davies et al., <xref ref-type="bibr" rid="B6">2018</xref>) based on the widely used spiking neural network simulator <monospace>Brian</monospace> (Stimberg et al., <xref ref-type="bibr" rid="B34">2019</xref>). We first dissect an individual computational unit from <monospace>Loihi</monospace>. The basic building block is a spiking unit inspired by a current based leaky integrate and fire (LIF) neuron model (see Gerstner et al., <xref ref-type="bibr" rid="B12">2014</xref>). Connections between these units can be plastic, enabling the implementation of diverse on-chip learning rules. Analyzing the computational unit allows us to create an exact emulation of the <monospace>Loihi</monospace> hardware on the computer. We extend this to a spiking neural network model and demonstrate that both <monospace>Loihi</monospace> and <monospace>Brian</monospace> implementations match perfectly. This exact match means one can do prototyping directly on the computer using <monospace>Brian</monospace> only, which adds another emulator in addition to the existing simulation backend in the Nengo Loihi library. This increases both availability and simplicity of algorithm design for <monospace>Loihi</monospace>, especially for those who are already used to working with <monospace>Brian</monospace>. In particular for the computational neuroscience community, this facilitates the translation of neuroscientific models to neuromorphic hardware. Finally, we review and implement synaptic plasticity and show that while individual weights show small deviations due to stochastic rounding, the statistics of a learning rule are preserved. Our aim is to facilitate the development of neuromorphic algorithms by delivering an open source emulator package that can easily be incorporated into existing workflows. 
In the process we provide a solid understanding of what the hardware computes, laying the appropriate foundation to design precise algorithms from the ground up.</p>
</sec>
<sec id="s2">
<title>2. Loihi&#x00027;s computational unit and its implementation</title>
<p>Developing a <monospace>Loihi</monospace> emulator requires precise understanding of how <monospace>Loihi</monospace> works. And to understand how something works, it is useful to &#x0201C;take it apart and put it back together again&#x0201D;. While we will not physically take the <monospace>Loihi</monospace> chip apart, we can inspect the components of its computational units with &#x0201C;pen and paper&#x0201D;. Then, by implementing each component on a computer we will test that, when put back together, the parts act like we expect them to. In the following we highlight how spiking units on <monospace>Loihi</monospace> approximate a variant of the well-known LIF model using first order Euler numerical integration with integer precision. This understanding enables us to emulate <monospace>Loihi</monospace>&#x00027;s spiking units on the computer in a way that is straightforward to use and easy to understand. For a better intuition of how the various parameters on <monospace>Loihi</monospace> interact, we refer readers to our neuron design tool<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> for <monospace>Loihi</monospace>. Readers familiar with Davies et al. (<xref ref-type="bibr" rid="B6">2018</xref>) and numerical implementations of LIF neurons may prefer to skip to Section 2.3.</p>
<sec>
<title>2.1. <monospace>Loihi</monospace>&#x00027;s neuron model: A recap</title>
<p>The basic computational unit on <monospace>Loihi</monospace> is inspired by a spiking neuron (Davies et al., <xref ref-type="bibr" rid="B6">2018</xref>). <monospace>Loihi</monospace> uses a variant of the leaky integrate and fire neuron model (Gerstner et al., <xref ref-type="bibr" rid="B12">2014</xref>) (see <xref ref-type="supplementary-material" rid="SM1">Appendix 9.1</xref>). Each unit <italic>i</italic> of <monospace>Loihi</monospace> implements the dynamics of the voltage <italic>v</italic><sub><italic>i</italic></sub></p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where the first term controls the voltage decay, the second term is the input to the unit, and the third term resets the voltage to zero after a spike by subtracting the threshold. A spike is generated if <inline-formula><mml:math id="M2"><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0003E;</mml:mo><mml:msubsup><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and transmitted to other units to which unit <italic>i</italic> is connected. In particular, <italic>v</italic> models the voltage across the membrane of a neuron, &#x003C4;<sub><italic>v</italic></sub> is the time constant for the voltage decay, <italic>I</italic> is an input variable, <italic>v</italic><sup><italic>th</italic></sup> is the threshold voltage to spike, and &#x003C3;(<italic>t</italic>) is the so-called spike train which is meant to indicate whether the unit spiked at time <italic>t</italic>. For each unit <italic>i</italic>, &#x003C3;<sub><italic>i</italic></sub>(<italic>t</italic>) can be written as a sum of Dirac delta distributions</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>t</italic><sub><italic>i, k</italic></sub> denotes the time of the <italic>k</italic>-th spike of unit <italic>i</italic>. Note that &#x003C3;<sub><italic>i</italic></sub> is not a function, but instead defines a <italic>distribution</italic> (i.e., <italic>generalized function</italic>), and is only meaningful under an integral sign. It is to be understood as the linear functional <inline-formula><mml:math id="M4"><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x0222B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:munder><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> for arbitrary, everywhere-defined function <italic>f</italic> (see Corollary 1 in <xref ref-type="supplementary-material" rid="SM1">Appendix 9.1.2</xref>).</p>
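<p>To make the dynamics of Equation (1) concrete, they can be integrated numerically with a forward Euler step. The following is a minimal sketch in plain Python; the parameter values are illustrative choices of ours, not <monospace>Loihi</monospace> defaults, and the exact discrete update used on the chip is derived in Section 2.2:</p>

```python
# Forward Euler sketch of Equation (1): dv/dt = -v/tau_v + I(t), with a
# spike emitted and the voltage reset to zero whenever v crosses v_th
# (cf. the threshold-subtraction term in Equation 1).
# Parameter values are illustrative, not Loihi defaults.

def simulate_lif(i_input, tau_v=20.0, v_th=1.0, dt=1.0):
    """Integrate the voltage for a sequence of input values, one per step.

    Returns the voltage trace and the indices of the steps that spiked.
    """
    v = 0.0
    trace, spikes = [], []
    for t, i_t in enumerate(i_input):
        v = v + dt * (-v / tau_v + i_t)   # Euler step of Equation (1)
        if v > v_th:                      # spike condition v_i > v_i^th
            spikes.append(t)              # spike time t_{i,k} of Equation (2)
            v = 0.0                       # reset after the spike
        trace.append(v)
    return trace, spikes

# Constant input drives the voltage toward tau_v * I = 1.6, above threshold,
# so the unit spikes regularly.
trace, spikes = simulate_lif([0.08] * 100)
```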
<p>Input to a unit can come from user defined external stimulation or from other units implemented on chip. Davies et al. (<xref ref-type="bibr" rid="B6">2018</xref>) describe the behavior of the input <italic>I</italic>(<italic>t</italic>) with</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M5"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:msub><mml:mrow><mml:mi>J</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msub><mml:mo>*</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">bias</mml:mtext></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>J</italic><sub><italic>ij</italic></sub> is the weight from unit <italic>j</italic> to <italic>i</italic>, <inline-formula><mml:math id="M6"><mml:msubsup><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">bias</mml:mtext></mml:mstyle></mml:mrow></mml:msubsup></mml:math></inline-formula> is a constant bias input, and the spike train &#x003C3;<sub><italic>j</italic></sub> of unit <italic>j</italic> is convolved with the synaptic filter impulse response &#x003B1;<sub><italic>I</italic></sub>, given by</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M7"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mi>H</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003C4;<sub><italic>I</italic></sub> is the time constant of the synaptic response and <italic>H</italic>(<italic>t</italic>) the unit step function. Note that &#x003B1;<sub><italic>I</italic></sub>(<italic>t</italic>) is defined differently here than in Davies et al. (<xref ref-type="bibr" rid="B6">2018</xref>) (see <xref ref-type="supplementary-material" rid="SM1">Appendix 9.1.3</xref> for details). The convolution from Equation (3) is a notational convenience for defining the synaptic input induced by an incoming spike train, simply summing over the time-shifted synaptic response functions, namely <inline-formula><mml:math id="M8"><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>*</mml:mo><mml:mi>f</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mover accent="true"><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:munder><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, where 
&#x003C4;<sub><italic>t</italic></sub><italic>f</italic>(<italic>x</italic>) &#x0003D; <italic>f</italic>(<italic>x</italic>&#x02212;<italic>t</italic>) and <inline-formula><mml:math id="M9"><mml:mover accent="true"><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> (see <xref ref-type="supplementary-material" rid="SM1">Appendix 9.1.2</xref>).</p>
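<p>As a numerical illustration of Equations (3), (4), and (6), the synaptic input at a single synapse can be evaluated directly as a sum of time-shifted exponential kernels, one per presynaptic spike. A minimal sketch in plain Python, with illustrative parameter values of our own choosing:</p>

```python
# Synaptic input for a single synapse: each presynaptic spike at time t_k
# contributes J * exp((t_k - t) / tau_I) for t at or after t_k, per the
# kernel alpha_I of Equation (4). Parameter values are illustrative,
# not Loihi defaults.
import math

def synaptic_input(t, spike_times, J=1.0, tau_I=10.0):
    """Evaluate I(t) as the sum of time-shifted exponential kernels."""
    total = 0.0
    for t_k in spike_times:
        if t >= t_k:                      # unit step H(t - t_k)
            total += J * math.exp((t_k - t) / tau_I)
    return total

# One spike at t = 0: I(0) = J, then exponential decay as in Equation (6).
I0 = synaptic_input(0.0, [0.0])
I10 = synaptic_input(10.0, [0.0])
```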
</sec>
<sec>
<title>2.2. Implementing <monospace>Loihi</monospace>&#x00027;s spiking unit in software</title>
<p>From the theoretical model on which <monospace>Loihi</monospace> is based, we can derive the set of operations each unit implements with a few simple steps. Using a first order approximation for the differential equations gives the update equations for the voltage and synaptic input described in the <monospace>Loihi</monospace> documentation.<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref> Combined with a few other details regarding <monospace>Loihi</monospace>&#x00027;s integer precision and the order of operations, we will have all we need to implement a <monospace>Loihi</monospace> spiking unit in software.</p>

<sec>
<title>2.2.1. Synaptic input</title>
<p>From Equation (3), we see that the synaptic input can be written as a sum of exponentially decaying functions with amplitude <italic>J</italic><sub><italic>ij</italic></sub> beginning at the time of each spike <italic>t</italic><sub><italic>j, k</italic></sub> (see <xref ref-type="supplementary-material" rid="SM1">Appendix 9.1.2</xref>). In particular we have</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M10"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:msub><mml:mrow><mml:mi>J</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow><mml:mi>H</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">bias</mml:mtext></mml:mrow></mml:msubsup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>To understand the behavior of the synaptic input, it is helpful to consider the effect of a single spike arriving at a single synapse. Simplifying Equation (5) to one neuron that receives one input spike at time <italic>t</italic><sub>1</sub> &#x0003D; 0, for <italic>t</italic> &#x02265; 0 we get</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M11"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>I</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>J</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>and for <italic>t</italic> &#x0003C; 0, <italic>I</italic>(<italic>t</italic>) &#x0003D; 0. Each spike induces a step increase in the current which decays exponentially with time constant &#x003C4;<sub><italic>I</italic></sub>. Taking the derivative of both sides with respect to <italic>t</italic> gives</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M12"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mtd><mml:mtd><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E8"><label>(8)</label><mml:math id="M13"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>I</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>=</mml:mo><mml:mi>J</mml:mi><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Applying the forward Euler method to the differential equation for &#x00394;<italic>t</italic> &#x0003D; 1 and <italic>t</italic> &#x02265; 0, <italic>t</italic> &#x02208; &#x02115; we get</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M14"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>I</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>J</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:mi>s</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>s</italic>[<italic>t</italic>] is zero unless there is an incoming spike on the synapse, in which case it is one. Here, <italic>s</italic>[0] &#x0003D; 1 and <italic>s</italic>[<italic>t</italic>] &#x0003D; 0 for <italic>t</italic> &#x0003E; 0. With this we have simply incorporated the initial condition into the update equation. Note that we have switched from a continuous [e.g., <italic>I</italic>(<italic>t</italic>)] to a discrete (e.g., <italic>I</italic>[<italic>t</italic>]) time formulation, where &#x00394;<italic>t</italic> &#x0003D; 1 and <italic>t</italic> is unitless.</p>
<p><monospace>Loihi</monospace> has a decay value &#x003B4;<sup><italic>I</italic></sup>, which is inversely proportional to &#x003C4;<sub><italic>I</italic></sub>, namely <inline-formula><mml:math id="M15"><mml:msup><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>/</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. Replacing &#x003C4;<sub><italic>I</italic></sub> with &#x003B4;<sup><italic>I</italic></sup> yields</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M16"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>I</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mi>J</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:mi>s</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The weight <italic>J</italic> is defined <italic>via</italic> the mantissa <inline-formula><mml:math id="M17"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and exponent &#x00398; (see Section 3.1) such that the equation describing the synaptic input becomes (with indices)</p>
<disp-formula id="E11"><label>(11)</label><mml:math id="M18"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>6</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:mo>&#x00398;</mml:mo></mml:mrow></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:mstyle displaystyle="true"><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>s</italic><sub><italic>j</italic></sub>[<italic>t</italic>] &#x02208; {0, 1} is the spike state of the <italic>j</italic><sup><italic>th</italic></sup> input neuron. Note that Equation (11) is identical to the corresponding equation in the <monospace>Loihi</monospace> documentation.</p>
<p>From this we can conclude that the implementation of synaptic input on <monospace>Loihi</monospace> is equivalent to evolving the LIF synaptic input differential equation with the forward Euler numerical integration method (see <xref ref-type="fig" rid="F1">Figure 1A1</xref>).</p>
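To make the correspondence concrete, the update of Equation (10) can be sketched in a few lines of Python. This is an illustrative re-implementation under stated assumptions, not code from the emulator; the round-away-from-zero step anticipates Section 2.2.3, and all function and parameter names are our own.

```python
def decay(value, delta):
    # One decay step: value * (2**12 - delta) / 2**12 in integer
    # arithmetic, rounded away from zero (see Section 2.2.3).
    q, r = divmod(abs(value) * (2**12 - delta), 2**12)
    if r:
        q += 1  # round away from zero
    return q if value >= 0 else -q

def synaptic_input_step(i_prev, delta_i, weight, spike):
    # Equation (10): I[t] = I[t-1] * (2**12 - delta_I) * 2**-12 + J * s[t]
    return decay(i_prev, delta_i) + weight * spike
```

For example, with a decay of 409 a synaptic input of 64 decays to 58 in the next time step.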
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>(A)</bold> Input trace of a single synapse (left) and voltage trace (right) of a neuron. The neuron receives randomly timed excitatory and inhibitory input spikes. The emulator (yellow) matches <monospace>Loihi</monospace> (blue) perfectly in both cases. Note that <monospace>Loihi</monospace> uses the voltage register to count refractory time, which results in a functionally irrelevant difference after a spike, e.g., time step 17 in A2 (see <xref ref-type="supplementary-material" rid="SM1">Appendix 9.2.2</xref>). <bold>(B)</bold> Network simulation with 400 excitatory (indices 100 &#x02212; 500) and 100 inhibitory (indices 0 &#x02212; 100) neurons. The network is driven by noise from an input population of 40 Poisson spike generators with a connection probability of 0.05. All spikes match exactly between the emulator and <monospace>Loihi</monospace> for all time steps. The figure shows the last 400 time steps from a simulation with 100,000 time steps.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fninf-16-1015624-g0001.tif"/>
</fig>
</sec>
<sec>
<title>2.2.2. Voltage</title>
<p>It is straightforward to perform the same analysis as above for the voltage equation. We consider the subthreshold voltage dynamics for a single neuron and can therefore ignore the reset term <inline-formula><mml:math id="M19"><mml:msubsup><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> from Equation (1), leaving us with</p>
<disp-formula id="E12"><label>(12)</label><mml:math id="M20"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mi>v</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Applying forward Euler gives</p>
<disp-formula id="E13"><label>(13)</label><mml:math id="M21"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>v</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>v</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>v</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x0002B;</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Again, to compare with the <monospace>Loihi</monospace> documentation we need to replace the time constant &#x003C4;<sub><italic>v</italic></sub> with a voltage decay parameter, &#x003B4;<sup><italic>v</italic></sup>, which is inversely proportional to the time constant, just as above for the synaptic input. Plugging in <inline-formula><mml:math id="M22"><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>/</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> leads to</p>
<disp-formula id="E14"><label>(14)</label><mml:math id="M23"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>v</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>v</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>By introducing a bias term, the voltage update becomes</p>
<disp-formula id="E15"><label>(15)</label><mml:math id="M24"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>12</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msubsup><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">bias</mml:mtext></mml:mrow></mml:msubsup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Equation (15) agrees with the <monospace>Loihi</monospace> documentation. Like the synaptic input, the voltage implementation on <monospace>Loihi</monospace> is equivalent to updating the LIF voltage differential equation using forward Euler numerical integration (see <xref ref-type="fig" rid="F1">Figure 1A2</xref>).</p>
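Under the same assumptions as above, Equation (15) translates directly into a sketch of the voltage update. This is illustrative only; the names are ours, and only the subthreshold dynamics (no threshold, reset, or refractory handling) are shown:

```python
def voltage_step(v_prev, delta_v, i_t, i_bias=0):
    # Equation (15): v[t] = v[t-1] * (2**12 - delta_v) * 2**-12 + I[t] + I_bias
    q, r = divmod(abs(v_prev) * (2**12 - delta_v), 2**12)
    if r:
        q += 1  # round away from zero (see Section 2.2.3)
    decayed = q if v_prev >= 0 else -q
    return decayed + i_t + i_bias
```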
</sec>
<sec>
<title>2.2.3. Integer precision</title>
<p><monospace>Loihi</monospace> uses integer precision, so the mathematical operations in the update equations above are to be understood in terms of integer arithmetic. In particular, for the synaptic input and voltage equations the emulator uses <italic>round away from zero</italic>, which can be defined as</p>
<disp-formula id="E16"><label>(16)</label><mml:math id="M25"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">round</mml:mtext></mml:mrow></mml:msub><mml:mo>:=</mml:mo><mml:mo class="qopname">sign</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo>&#x02308;</mml:mo><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mo>&#x02309;</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x02308;&#x000B7;&#x02309; is the ceiling function and sign(&#x000B7;) the sign function.</p>
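In Python this rounding mode is a one-liner (a sketch for illustration; the emulator may organize this differently):

```python
import math

def round_away_from_zero(x):
    # Equation (16): x_round := sign(x) * ceil(|x|)
    return int(math.copysign(math.ceil(abs(x)), x))
```

For example, 2.3 rounds to 3 and &#x02212;2.3 rounds to &#x02212;3, whereas Python's floor-based integer division would round toward negative infinity.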
</sec>
</sec>
<sec>
<title>2.3. Summary</title>
<p>We now have all of the pieces required to understand and emulate a spiking unit from <monospace>Loihi</monospace>. Evolving the differential equations for the current-based LIF model with the forward Euler method and using the appropriate rounding (see Section 2.2.3) and update schedule (see Section 4.1 and <xref ref-type="supplementary-material" rid="SM1">Appendix 9.2.1</xref>) is enough to exactly reproduce <monospace>Loihi</monospace>&#x00027;s behavior. This procedure is summarized in <xref ref-type="table" rid="T3">Algorithm 1</xref> and an exact match between <monospace>Loihi</monospace> and an implementation for a single unit in <monospace>Brian</monospace> is shown in <xref ref-type="fig" rid="F1">Figure 1A</xref>. Please note that during the refractory period <monospace>Loihi</monospace> uses the voltage trace to count elapsed time (see <xref ref-type="fig" rid="F1">Figure 1A2</xref>, <xref ref-type="supplementary-material" rid="SM1">Appendix 9.2.2</xref>), while in the emulator the voltage is simply clamped to zero.</p>
<table-wrap position="float" id="T3">
<label>Algorithm 1</label>
<caption><p>Loihi single neuron emulator.</p></caption>
<graphic xlink:href="fninf-16-1015624-i0001.tif"/>
</table-wrap>
</sec>
</sec>
<sec id="s3">
<title>3. Network and plasticity</title>
<p>We now have a working implementation of <monospace>Loihi</monospace>&#x00027;s spiking unit. Next, we need to connect these units into networks. If a network is to learn online, the connections between its units must be plastic. In this section, we review how weights are defined on <monospace>Loihi</monospace> and how learning rules are applied, including the calculation of pre- and post-synaptic traces. Based on this, we outline how these features are implemented in the emulator.</p>
<sec>
<title>3.1. Synaptic weights</title>
<p>The synaptic weight consists of two parts, a weight mantissa <inline-formula><mml:math id="M32"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:math></inline-formula> and a weight exponent &#x00398; and is of the form <inline-formula><mml:math id="M33"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>&#x000B7;</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>6</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x00398;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula>. However, in practice the calculation of the synaptic weight depends on bit shifts and its precision depends on a few parameters (see below). The weight exponent is a value between &#x02212;8 and 7 that scales the weight mantissa exponentially. Depending on the sign mode of the weight (excitatory, inhibitory, or mixed), the mantissa is an integer in the range <inline-formula><mml:math id="M34"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>255</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M35"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mn>255</mml:mn><mml:mo>,</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula>, or <inline-formula><mml:math id="M36"><mml:mover 
accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mn>256</mml:mn><mml:mo>,</mml:mo><mml:mn>254</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula>, respectively. The possible values of the mantissa depend on the number of bits available for storing the weight and whether the sign mode is <italic>mixed</italic> or not. In particular, precision is defined as <inline-formula><mml:math id="M37"><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:math></inline-formula>, with</p>
<disp-formula id="E17"><label>(17)</label><mml:math id="M38"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>8</mml:mn><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>w</mml:mi><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">mixed</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This can intuitively be understood with a few examples. If the weight bits for the weight mantissa are set to the default value of <italic>n</italic><sub><italic>wb</italic></sub> &#x0003D; 8 bits, it can store 256 values between 0 and 255, i.e., the precision is then 2<sup>8&#x02212;(8 &#x02212; 0)</sup> &#x0003D; 2<sup>0</sup> &#x0003D; 1. If <italic>n</italic><sub><italic>wb</italic></sub> &#x0003D; 6 bits is chosen, we instead have a precision of 2<sup>8&#x02212;(6 &#x02212; 0)</sup> &#x0003D; 2<sup>2</sup> &#x0003D; 4, meaning there are 64 possible values for the weight mantissa, <inline-formula><mml:math id="M39"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mn>8</mml:mn><mml:mo>,</mml:mo><mml:mn>12</mml:mn><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mn>252</mml:mn></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>. If the sign mode is <italic>mixed</italic>, i.e., &#x003C3;<sub><italic>mixed</italic></sub> &#x0003D; 1, one bit is used to store the sign, which reduces the precision. Mixed mode enables both positive and negative weights, with weight mantissa between &#x02212;256 and 254.
Assuming <italic>n</italic><sub><italic>wb</italic></sub> &#x0003D; 8 in mixed mode, precision is 2<sup>8&#x02212;(8 &#x02212; 1)</sup> &#x0003D; 2<sup>1</sup> &#x0003D; 2 and <inline-formula><mml:math id="M40"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mn>256</mml:mn><mml:mo>,</mml:mo><mml:mo>-</mml:mo><mml:mn>254</mml:mn><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mo>-</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mo>-</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mn>254</mml:mn></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math></inline-formula>.</p>
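The representable mantissa values can be enumerated directly from Equation (17). The following sketch covers only the non-negative (excitatory) and mixed sign modes; the function and parameter names are our own:

```python
def mantissa_values(n_wb=8, mixed=False):
    # Equation (17): n_s = 8 - (n_wb - sigma_mixed); the precision is 2**n_s.
    n_s = 8 - (n_wb - (1 if mixed else 0))
    step = 2**n_s
    lo, hi = (-256, 254) if mixed else (0, 255)  # ranges from Section 3.1
    return list(range(lo, hi + 1, step))
```

With the defaults this reproduces the 256 values of the 8-bit excitatory case, and with n_wb=6 the 64 values {0, 4, 8, 12, ..., 252}.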
<sec>
<title>3.1.1. Weight initialization</title>
<p>While the user can define an arbitrary weight mantissa within the allowed range, during initialization the value is rounded, given the precision, to the next possible value toward zero. This is achieved <italic>via</italic> bit shifting; that is, the weight mantissa is shifted by</p>
<disp-formula id="E18"><label>(18)</label><mml:math id="M41"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">shifted</mml:mtext></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>&#x0226B;</mml:mo><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0226A;</mml:mo><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x0226B; and &#x0226A; denote a right and a left bit shift, respectively. Afterwards, the weight exponent is used to scale the weight according to</p>
<disp-formula id="E19"><label>(19)</label><mml:math id="M42"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msup><mml:mrow><mml:mi>J</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">scaled</mml:mtext></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">shifted</mml:mtext></mml:mrow></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>6</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:mo>&#x00398;</mml:mo></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This value cannot be greater than 21 bits and is clipped if it exceeds this limit. Note that this happens in only one case, namely <inline-formula><mml:math id="M43"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mn>256</mml:mn></mml:math></inline-formula> and &#x00398; &#x0003D; 7. Finally, the scaled value <italic>J</italic><sup>scaled</sup> is shifted again according to</p>
<disp-formula id="E20"><label>(20)</label><mml:math id="M44"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>J</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>J</mml:mi></mml:mrow><mml:mrow><mml:mtext class="textrm" mathvariant="normal">scaled</mml:mtext></mml:mrow></mml:msup><mml:mo>&#x0226B;</mml:mo><mml:mn>6</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0226A;</mml:mo><mml:mn>6</mml:mn><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>J</italic> is the final weight.</p>
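Equations (18)&#x02013;(20) can be combined into a single sketch. Two caveats: the toward-zero rounding of negative mantissas and the exact clipping bound at 21 bits are our reading of the text, and the function is illustrative rather than the emulator's actual API:

```python
def init_weight(mantissa, exponent=0, n_wb=8, mixed=False):
    n_s = 8 - (n_wb - (1 if mixed else 0))  # Equation (17)
    sign = -1 if mantissa < 0 else 1
    # Equation (18): drop the n_s lowest bits (rounding toward zero, as stated)
    w_shifted = sign * ((abs(mantissa) >> n_s) << n_s)
    # Equation (19): scale by the weight exponent
    j_scaled = w_shifted * 2**(6 + exponent)
    # Clip to 21 bits (assumed symmetric bound; per the text this is only
    # hit for mantissa -256 with exponent 7)
    j_scaled = max(min(j_scaled, 2**21 - 1), -(2**21 - 1))
    # Equation (20): final right/left shift by 6 bits
    return (-1 if j_scaled < 0 else 1) * ((abs(j_scaled) >> 6) << 6)
```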
<p>We provide a table with all 4096 possible weights depending on the mantissa and the exponent in a <monospace>Jupyter</monospace> notebook<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref>. These values are provided for all three sign modes.</p>
</sec>
<sec>
<title>3.1.2. Plastic synapses</title>
<p>In the case of a <italic>static</italic> synapse, the initialized weight remains the same as long as the chip/emulator is running. Thus <italic>static</italic> synapses are fully described by the details above. For <italic>plastic</italic> synapses, the weight can change over time. This requires a method to ensure that changes to the weight adhere to its precision.</p>
<p>For <italic>plastic</italic> synapses, <italic>stochastic rounding</italic> is applied to the mantissa during each weight update. Whether the weight mantissa is rounded up or down depends on its proximity to the nearest possible values above and below, i.e.,</p>
<disp-formula id="E22"><label>(21)</label><mml:math id="M46"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mtext>RS</mml:mtext><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mtext>sign</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x000B7;</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x0230A;</mml:mo> <mml:mrow><mml:mrow><mml:mo>|</mml:mo> <mml:mi>x</mml:mi> <mml:mo>|</mml:mo></mml:mrow></mml:mrow> <mml:mo>&#x0230B;</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;with&#x000A0;probability&#x000A0;</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mo>&#x02212;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mo>|</mml:mo> <mml:mi>x</mml:mi> <mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x0230A;</mml:mo> <mml:mrow><mml:mrow><mml:mo>|</mml:mo> <mml:mi>x</mml:mi> <mml:mo>|</mml:mo></mml:mrow></mml:mrow> <mml:mo>&#x0230B;</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mtext>&#x0200B;</mml:mtext><mml:mo 
stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>/</mml:mo><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mtext>sign</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x000B7;</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x0230A;</mml:mo> <mml:mrow><mml:mrow><mml:mo>|</mml:mo> <mml:mi>x</mml:mi> <mml:mo>|</mml:mo></mml:mrow></mml:mrow> <mml:mo>&#x0230B;</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mtext>&#x0200B;</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;with&#x000A0;probability&#x000A0;</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mo>|</mml:mo> <mml:mi>x</mml:mi> <mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x0230A;</mml:mo> <mml:mrow><mml:mrow><mml:mo>|</mml:mo> <mml:mi>x</mml:mi> <mml:mo>|</mml:mo></mml:mrow></mml:mrow> <mml:mo>&#x0230B;</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mtext>&#x0200B;</mml:mtext><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>/</mml:mo><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow>
</mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M47"><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x0230A;</mml:mo><mml:mrow><mml:mo>&#x000B7;</mml:mo></mml:mrow><mml:mo>&#x0230B;</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:msub></mml:math></inline-formula> denotes rounding down to the nearest multiple of <inline-formula><mml:math id="M48"><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:math></inline-formula>. After the mantissa is rounded, it is scaled by the weight exponent and the right/left bit shifting is applied to the result to compute the actual weight <italic>J</italic>. How this is realized in the emulator is shown in Code Listing 3.</p>
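Independently of the emulator code referenced above, Equation (21) can be transcribed almost literally into Python (a sketch; the random source used on the chip and in the emulator may differ):

```python
import random

def stochastic_round(x, n_s):
    # Equation (21): round |x| to a neighbouring multiple of 2**n_s,
    # choosing up or down with probability given by the proximity.
    step = 2**n_s
    sign = -1 if x < 0 else 1
    low = (abs(x) // step) * step   # nearest multiple of 2**n_s below |x|
    p_up = (abs(x) - low) / step    # probability of rounding up
    if random.random() < p_up:
        low += step
    return sign * low
```

Exact multiples of 2<sup><italic>n</italic><sub><italic>s</italic></sub></sup> are left unchanged, and the expected value of the rounded mantissa equals the unrounded one, which is the point of stochastic rounding.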
<p>To test that our implementation of the weight update for <italic>plastic</italic> synapses matches <monospace>Loihi</monospace> for each possible number of weight bits, we compared the progression of the weights over time for a simple learning rule. The analysis is described in detail in <xref ref-type="supplementary-material" rid="SM1">Appendix 9.4</xref>.</p>
</sec>
</sec>
<sec>
<title>3.2. Pre- and post-synaptic traces</title>
<p>Pre- and post-synaptic traces are used for defining learning rules. <monospace>Loihi</monospace> provides two pre-synaptic traces <italic>x</italic><sub>1</sub>, <italic>x</italic><sub>2</sub> and three post-synaptic traces <italic>y</italic><sub>1</sub>, <italic>y</italic><sub>2</sub>, <italic>y</italic><sub>3</sub>. Pre-synaptic traces are increased by a constant value <inline-formula><mml:math id="M49"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, for <italic>i</italic> &#x02208; {1, 2}, if the pre-synaptic neuron spikes. The post-synaptic traces are increased by &#x00177;<sub><italic>j</italic></sub> for <italic>j</italic> &#x02208; {1, 2, 3}, accordingly. So-called <italic>dependency factors</italic> are available, indicating events like <italic>x</italic><sub>0</sub> &#x0003D; 1 if the pre-synaptic neuron spikes or <italic>y</italic><sub>0</sub> &#x0003D; 1 if the post-synaptic neuron spikes. These factors can be combined with the trace variables by addition, subtraction, or multiplication.</p>
<p>A simple spike-time dependent plasticity (STDP) rule with an asymmetric learning window would, for example, look like <italic>dw</italic> &#x0003D; <italic>x</italic><sub>1</sub>&#x000B7;<italic>y</italic><sub>0</sub>&#x02212;<italic>y</italic><sub>1</sub>&#x000B7;<italic>x</italic><sub>0</sub>. This rule leads to a positive change in the weight (<italic>dw</italic> &#x0003E; 0) if the pre-synaptic neuron fires shortly before the post-synaptic neuron (i.e., positive trace <italic>x</italic><sub>1</sub> &#x0003E; 0 when <italic>y</italic><sub>0</sub> &#x0003D; 1) and to a negative change (<italic>dw</italic> &#x0003C; 0) if the post-synaptic neuron fires shortly before the pre-synaptic neuron (i.e., positive trace <italic>y</italic><sub>1</sub> &#x0003E; 0 when <italic>x</italic><sub>0</sub> &#x0003D; 1). Thus, the time window in which changes may occur depends on the shape of the traces (i.e., impulse strength <inline-formula><mml:math id="M50"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, &#x00177;<sub><italic>i</italic></sub>; and decay &#x003C4;<sub><italic>x</italic><sub><italic>i</italic></sub></sub>, &#x003C4;<sub><italic>y</italic><sub><italic>j</italic></sub></sub>, see below).</p>
<p>For a sequence of spikes <italic>s</italic>[<italic>t</italic>] &#x02208; {0, 1}, a trace is defined as</p>
<disp-formula id="E23"><label>(22)</label><mml:math id="M51"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mi>s</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003B1; is a decay factor (see Davies et al., <xref ref-type="bibr" rid="B6">2018</xref>). This equation holds for pre-synaptic (<italic>x</italic><sub><italic>i</italic></sub>) and post-synaptic (<italic>y</italic><sub><italic>j</italic></sub>) traces alike. In practice, however, one does not set &#x003B1; directly on <monospace>Loihi</monospace> but instead specifies the decay time constants &#x003C4;<sub><italic>x</italic><sub><italic>i</italic></sub></sub> and &#x003C4;<sub><italic>y</italic><sub><italic>j</italic></sub></sub>.</p>
<p>In the emulator we again use a first-order approximation for the synaptic traces, as for the synaptic input and the voltage. Under this approximation of the exponential decay, we replace &#x003B1; in Equation (22) by</p>
<disp-formula id="E24"><label>(23)</label><mml:math id="M52"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>&#x003B1;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Using this approximation gives reasonable results across a number of different &#x003C4;<sub><italic>x</italic><sub><italic>i</italic></sub></sub> and &#x003C4;<sub><italic>y</italic><sub><italic>i</italic></sub></sub> values (see <xref ref-type="supplementary-material" rid="SM1">Figure A2</xref>). While this essentially suffices, it could be improved by introducing an additional parameter, e.g., &#x003B2;, and optimizing &#x003B1;(&#x003C4;<sub><italic>x</italic><sub><italic>i</italic></sub></sub>, &#x003B2;).</p>
<p>Note that traces again have integer precision. In contrast to the <italic>round away from zero</italic> applied in the neuron model, however, <italic>stochastic rounding</italic> is used here. Since traces are integer values between 0 and 127, the general definition in Equation (21) simplifies to</p>
<disp-formula id="E25"><label>(24)</label><mml:math id="M53"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mtext class="textrm" mathvariant="normal">RS</mml:mtext></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x02265;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mo>&#x0230A;</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0230B;</mml:mo></mml:mrow><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">with probability&#x000A0;</mml:mtext><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>-</mml:mo><mml:mrow><mml:mo>&#x0230A;</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0230B;</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mo>&#x0230A;</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0230B;</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">with probability&#x000A0;</mml:mtext><mml:mi>x</mml:mi><mml:mo>-</mml:mo><mml:mrow><mml:mo>&#x0230A;</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0230B;</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
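<p>Combining the first-order decay of Equation (23) with the stochastic rounding of Equation (24), a single trace update step can be sketched as follows. This is a simplified model rather than the emulator&#x00027;s code; the clipping bound of 127 and the ordering of decay, rounding, and spike impulse are assumptions based on the description above.</p>

```python
import random

def trace_step(x, tau, impulse=0, spiked=False, rng=random):
    """One integer trace update step (sketch of Eqs. 22-24)."""
    # first-order decay, Eq. (23) substituted into Eq. (22)
    decayed = (1 - 1 / tau) * x
    # stochastic rounding, Eq. (24): round up with probability
    # equal to the fractional part (traces are non-negative)
    frac = decayed - int(decayed)
    rounded = int(decayed) + (1 if frac > rng.random() else 0)
    # add the impulse (x_hat or y_hat) on a spike; clip to 127
    # (assumed upper bound, since traces lie between 0 and 127)
    if spiked:
        rounded += impulse
    return min(rounded, 127)
```
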
<p>Since this rounding procedure is probabilistic and the details of the random number generator are unknown, rounding introduces discrepancies when emulating <monospace>Loihi</monospace> on a computer. Further improvements would be possible if more details of the chip&#x00027;s rounding mechanism were known.</p>
</sec>
<sec>
<title>3.3. Summary</title>
<p>At this point we are able to connect neurons with synapses and build networks of neurons (see <xref ref-type="fig" rid="F1">Figure 1B</xref>). We have shown how the weights are handled, depending on the user-defined number of weight bits and the sign mode. In addition, using the dynamics of the pre- and post-synaptic traces, we can now define learning rules. Note that, unlike the neuron model, the synaptic traces cannot be reproduced exactly, since the details of the random number generator used for stochastic rounding are unknown. However, <xref ref-type="fig" rid="F2">Figure 2</xref> shows that the synaptic traces emulated in <monospace>Brian</monospace> are very close to the original ones in <monospace>Loihi</monospace> and that the behavior of a standard asymmetric STDP rule can be reproduced with the emulator.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Comparing a STDP learning rule performed with the emulator and with <monospace>Loihi</monospace>. <bold>(A)</bold> Sketch showing the setup. <bold>(B)</bold> Synaptic trace for many trials showing the arithmetic mean and standard deviation. The inset shows the same data in a logarithmic scale. Note that every data point smaller than 10<sup>0</sup> shows the probability of rounding values between 0 and 1 up or down. <bold>(C)</bold> Relative difference <inline-formula><mml:math id="M54"><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mo>/</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">max</mml:mtext></mml:mstyle></mml:mrow></mml:msub></mml:math></inline-formula> for the plastic weight between the emulator, <inline-formula><mml:math id="M55"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and the <monospace>Loihi</monospace> implementation, <inline-formula><mml:math id="M56"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, for 50 simulations, <inline-formula><mml:math id="M57"><mml:msub><mml:mrow><mml:mover 
accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mstyle class="text"><mml:mtext class="textrm" mathvariant="normal">max</mml:mtext></mml:mstyle></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>255</mml:mn></mml:math></inline-formula>. <bold>(D)</bold> STDP weight change with respect to pre- and post-synaptic spike times; data are shown for time steps 0&#x02013;2,000 for visualization purposes.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fninf-16-1015624-g0002.tif"/>
</fig>
</sec>
</sec>
<sec id="s4">
<title>4. Loihi emulator based on Brian</title>
<p>Here we provide an overview of the emulator package and show some examples and results. The package enables straightforward emulation of the basic features of <monospace>Loihi</monospace> as a sandbox for experimenters. Note that we have deliberately not included routing and mapping restrictions, such as limits on the number of neurons or synapses, as these depend on constraints such as the number of <monospace>Loihi</monospace> chips used.</p>
<sec>
<title>4.1. The package</title>
<p>The emulator package is available on <italic>PyPI</italic><xref ref-type="fn" rid="fn0006"><sup>6</sup></xref> and can be installed using the <monospace>pip</monospace> package manager. The emulator does not provide the full functionality of the <monospace>Loihi</monospace> chip and software, but it covers the most important aspects. An overview of all provided features is given in <xref ref-type="supplementary-material" rid="SM1">Table A1</xref> (<xref ref-type="supplementary-material" rid="SM1">Appendix</xref>). The package contains six classes that extend the corresponding <monospace>Brian</monospace> classes. The classes are briefly introduced in the following; further details can be found in the code.<xref ref-type="fn" rid="fn0007"><sup>7</sup></xref></p>
<sec>
<title>4.1.1. Network</title>
<p>The <monospace>LoihiNetwork</monospace> class extends the <monospace>Brian Network</monospace> class and provides the same attributes as the original <monospace>Brian</monospace> class. The main difference is that it initializes the default clock and the integration methods and updates the schedule when a <monospace>Network</monospace> instance is created. Note that it is necessary to make explicit use of the <monospace>LoihiNetwork</monospace>; it is not possible to use <monospace>Brian</monospace>&#x00027;s <italic>magic network</italic>.</p>
<p>Voltage and synaptic input are evolved with the forward Euler integration method, which was introduced in Section 2.2. Additionally, a state updater is defined for the pre- and post-synaptic traces.</p>
<fig position="float">
<caption><p>Neuron model equations of the voltage and the synaptic input for <monospace>Brian</monospace>. They include <italic>round away from zero</italic> rounding.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fninf-16-1015624-g0004.tif"/>
</fig>
<p>The default network update schedule in <monospace>Brian</monospace>, which determines the computational order of the variables, does not match the order of computation on <monospace>Loihi</monospace>. The <monospace>Brian</monospace> update schedule is therefore altered when initializing the <monospace>LoihiNetwork</monospace>; more details are given in <xref ref-type="supplementary-material" rid="SM1">Appendix 9.2.1</xref>.</p>
</sec>
<sec>
<title>4.1.2. Neuron group</title>
<p>The <monospace>LoihiNeuronGroup</monospace> extends <monospace>Brian</monospace>&#x00027;s <monospace>NeuronGroup</monospace> class. The parameters of the <monospace>LoihiNeuronGroup</monospace> class mostly differ from those of the <monospace>Brian</monospace> class and correspond to <monospace>Loihi</monospace> parameters. When an instance is created, the given parameters are first checked against the requirements of <monospace>Loihi</monospace>. The differential equations describing the neural dynamics are shown in Code Listing 1. Since <monospace>Brian</monospace> does not provide a <italic>round away from zero</italic> functionality, we need to define it manually as an equation.</p>
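<p>As an illustration, a <italic>round away from zero</italic> can be written in plain Python. This sketch assumes that rounding away from zero means rounding the magnitude up to the next integer; the actual equation strings used in the emulator are shown in Code Listing 1, and the <monospace>decay</monospace> helper below is purely hypothetical.</p>

```python
import math

def round_away_from_zero(x):
    # round the magnitude up to the next integer, keeping the sign:
    # 1.2 -> 2, -1.2 -> -2, 3.0 -> 3
    return int(math.copysign(math.ceil(abs(x)), x))

def decay(v, tau):
    # illustrative first-order decay step where the decayed amount is
    # rounded away from zero (hypothetical helper, not the emulator's
    # equation string)
    return v - round_away_from_zero(v / tau)
```
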
</sec>
<sec>
<title>4.1.3. Synapses</title>
<p>The <monospace>LoihiSynapses</monospace> class extends the <monospace>Synapses</monospace> class from <monospace>Brian</monospace>. Again, most of the <monospace>Brian</monospace> parameters are not supported; instead, <monospace>Loihi</monospace> parameters are available. When instantiating a <monospace>LoihiSynapses</monospace> object, the needed pre- and post-synaptic traces are included as equations (shown in Code Listing 2), as introduced in Section 3.2. Moreover, it is verified that the defined learning rule only uses variables and operations supported by <monospace>Loihi</monospace>. The equations for the weight update are shown in Code Listing 3.</p>
<p>Since we have no access to the underlying mechanism and cannot reproduce the pseudo-stochastic rounding exactly, we have to find a stochastic rounding that matches <monospace>Loihi</monospace> in distribution. Note that on <monospace>Loihi</monospace> the same network configuration leads to reproducible results (i.e., the same rounding). Thus, to compare the behavior of <monospace>Loihi</monospace> and the emulator, we simulate a number of network settings and compare the distributions of the traces. <xref ref-type="fig" rid="F2">Figure 2B</xref> shows the match between the distributions. As a consequence, our implementation always differs slightly from the <monospace>Loihi</monospace> simulation due to differences in rounding. In <xref ref-type="fig" rid="F2">Figure 2C</xref>, we show that these variations remain constant and do not diverge. In addition, <xref ref-type="fig" rid="F2">Figure 2D</xref> shows that the principal behavior of a learning rule is preserved.</p>
<fig position="float">
<caption><p>Synaptic decay equation for <monospace>Brian</monospace>. Only the decay for <italic>x</italic><sub>1</sub> is shown; the decay for <italic>x</italic><sub>2</sub>, <italic>y</italic><sub>1</sub>, <italic>y</italic><sub>2</sub>, <italic>y</italic><sub>3</sub> is applied analogously. It contains an approximation of the exponential decay and stochastic rounding.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fninf-16-1015624-g0005.tif"/>
</fig>
<fig position="float">
<caption><p>Weight equations for <monospace>Brian</monospace>. The first part creates variables that allow terms of the plasticity rule to be evaluated only at the 2<sup><italic>k</italic></sup> time step. <italic>dw</italic> contains the user defined learning rule. The updated weight mantissa is adapted depending on the number of weight bits, which determines the precision. The weight mantissa is rounded with <italic>stochastic rounding</italic>. After clipping, the weight mantissa is updated and the actual weight is calculated.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fninf-16-1015624-g0006.tif"/>
</fig>
</sec>
<sec>
<title>4.1.4. State monitor and Spike monitor</title>
<p>The <monospace>LoihiStateMonitor</monospace> class extends the <monospace>StateMonitor</monospace> class from <monospace>Brian</monospace>, while the <monospace>LoihiSpikeMonitor</monospace> class extends the <monospace>SpikeMonitor</monospace> class. Both classes support the most important parameters of their parent classes and update the schedule for the timing of the probes. This schedule update avoids shifts in the monitored variables relative to <monospace>Loihi</monospace>.</p>
</sec>
<sec>
<title>4.1.5. Spike generator group</title>
<p>The <monospace>LoihiSpikeGeneratorGroup</monospace> extends the <monospace>SpikeGeneratorGroup</monospace> class from <monospace>Brian</monospace>. This class merely restricts the available parameters, preventing users from unintentionally changing variables in ways that would cause the emulation to deviate from <monospace>Loihi</monospace>.</p>
</sec>
</sec>
<sec>
<title>4.2. Examples</title>
<p>To demonstrate that the <monospace>Loihi</monospace> emulator works as expected, we provide three examples covering a single neuron, a recurrently connected spiking neural network, and the application of a learning rule. All three examples are available as <monospace>Jupyter</monospace> notebooks.<xref ref-type="fn" rid="fn0008"><sup>8</sup></xref></p>
<sec>
<title>4.2.1. Neuron model</title>
<p>In a first test, we simulated a single neuron. The neuron receives randomly timed excitatory and inhibitory input spikes. <xref ref-type="fig" rid="F1">Figure 1A1</xref> shows the synaptic responses induced by the input spikes for the simulation using the <monospace>Loihi</monospace> chip and the <monospace>Loihi</monospace> emulator. The corresponding voltage traces are shown in <xref ref-type="fig" rid="F1">Figure 1A2</xref>. As expected, the synaptic input as well as the voltage match perfectly between the hardware and the emulator.</p>
</sec>
<sec>
<title>4.2.2. Network</title>
<p>In a second test we simulated a recurrently connected network of 400 excitatory and 100 inhibitory neurons with log-normal weights. The network receives noisy background input from 40 Poisson generators that are connected to the network with a probability of 0.05. As already shown by others, this setup leads to highly chaotic behavior (Sompolinsky et al., <xref ref-type="bibr" rid="B30">1988</xref>; Van Vreeswijk and Sompolinsky, <xref ref-type="bibr" rid="B37">1996</xref>; Brunel, <xref ref-type="bibr" rid="B4">2000</xref>; London et al., <xref ref-type="bibr" rid="B15">2010</xref>). Despite the chaotic dynamics, spikes, voltages, and synaptic inputs match perfectly for all neurons over the entire simulation. The spiking pattern of the network is shown in <xref ref-type="fig" rid="F1">Figure 1B</xref>, where all yellow (<monospace>Brian</monospace>) and blue (<monospace>Loihi</monospace>) dots match perfectly.</p>
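<p>For orientation, the connectivity setup of such a network can be sketched with NumPy. The log-normal distribution parameters below are illustrative placeholders, not the values used in our experiment.</p>

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_exc, n_inh, n_gen = 400, 100, 40   # population sizes as stated above
n = n_exc + n_inh

# log-normal weight mantissas (mean and sigma are illustrative
# assumptions, not the parameters used in the paper)
weights = rng.lognormal(mean=3.0, sigma=0.5, size=(n, n))

# 40 Poisson generators, each connected to every neuron with p = 0.05
input_mask = 0.05 > rng.random((n_gen, n))
```
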
</sec>
<sec>
<title>4.2.3. Learning</title>
<p>In the last experiment, we applied a simple STDP learning rule, defined below in Equation (25), at a single plastic synapse. The experiment is sketched in <xref ref-type="fig" rid="F2">Figure 2A</xref>. One spike generator, denoted <italic>input</italic>, has a plastic connection to a neuron with a very low weight (<inline-formula><mml:math id="M58"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>128</mml:mn></mml:math></inline-formula>, &#x00398; &#x0003D; &#x02212;6), such that it has a negligible effect on the post-synaptic neuron. Another spike generator, denoted <italic>noise</italic>, has a large but static weight (<inline-formula><mml:math id="M59"><mml:mover accent="true"><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mn>254</mml:mn></mml:math></inline-formula>, &#x00398; &#x0003D; 0) to reliably induce post-synaptic spikes. <xref ref-type="fig" rid="F2">Figure 2B</xref> compares the distribution of traces between the emulator and <monospace>Loihi</monospace>. For this comparison, 400 trials were simulated.</p>
<p>We chose an asymmetric learning window for the STDP rule. The learning rule uses one pre-synaptic trace <italic>x</italic><sub>1</sub> (<inline-formula><mml:math id="M60"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>120</mml:mn></mml:math></inline-formula>, &#x003C4;<sub><italic>x</italic><sub>1</sub></sub> &#x0003D; 8) and one post-synaptic trace <italic>y</italic><sub>1</sub> (&#x00177;<sub>1</sub> &#x0003D; 120, &#x003C4;<sub><italic>y</italic><sub>1</sub></sub> &#x0003D; 8). In addition, the dependency factors <italic>x</italic><sub>0</sub> &#x02208; {0, 1} and <italic>y</italic><sub>0</sub> &#x02208; {0, 1} are used, which indicate a pre- and a post-synaptic spike, respectively. Using these components, the learning rule is defined as</p>
<disp-formula id="E26"><label>(25)</label><mml:math id="M61"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>d</mml:mi><mml:mi>w</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
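<p>For a single pair of pre- and post-synaptic spikes, the weight change produced by this rule can be evaluated directly from the trace model. The sketch below is deterministic: it uses the first-order decay of Equation (23) and ignores stochastic rounding, so it shows the expected shape of the learning window rather than the chip&#x00027;s exact values; the case of simultaneous spikes depends on the update schedule and is left out.</p>

```python
def stdp_dw(dt, x1_impulse=120, y1_impulse=120, tau=8):
    """Weight change of Eq. (25) for one spike pair with time difference
    dt = t_post - t_pre (in time steps); deterministic sketch without
    stochastic rounding."""
    alpha = 1 - 1 / tau
    if dt > 0:
        # pre fires first: x1 has decayed for dt steps when y0 = 1
        return 0.25 * x1_impulse * alpha ** dt
    if dt == 0:
        # simultaneous spikes: the result depends on the update
        # schedule and is omitted in this sketch
        return 0.0
    # post fires first: y1 has decayed for -dt steps when x0 = 1
    return -0.25 * y1_impulse * alpha ** (-dt)
```

<p>For a time difference of one step this gives <italic>dw</italic> &#x0003D; 2<sup>&#x02212;2</sup>&#x000B7;120&#x000B7;(7/8) &#x0003D; 26.25, and the window decays exponentially with increasing |<italic>dt</italic>|.</p>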
<p>Due to the stochastic rounding of the traces, differences in the weight changes occur, which are shown in <xref ref-type="fig" rid="F2">Figure 2C</xref>. Fortunately, the relative weight error remains low at a constant level of 0.027 &#x000B1; 0.027 and does not diverge, even over long simulation times, e.g., 100,000 steps. Despite these variations, the STDP learning window of the emulator reproduces the behavior of the <monospace>Loihi</monospace> learning window, as shown in <xref ref-type="fig" rid="F2">Figure 2D</xref>.</p>
</sec>
</sec>
<sec>
<title>4.3. Performance tests</title>
<p>An important argument for the development of the <monospace>Brian2Loihi</monospace> emulator was&#x02014;besides improving the understanding of <monospace>Loihi</monospace>&#x00027;s functionality&#x02014;its usefulness for prototyping. When developing new models, algorithms, and applications, often large parameter scans are performed in which many networks with different parameter sets are initialized and executed. During this process, it is crucial to be able to read out spiking information to measure performance. For this reason we measured initialization and execution times both with and without spike monitoring on the <monospace>Loihi</monospace> chip and in the <monospace>Loihi</monospace> emulator.</p>
<p><xref ref-type="fig" rid="F3">Figure 3A</xref> compares <italic>initialization</italic> times for a randomly connected network of different sizes. Networks were stimulated with background noise to maintain a consistent firing rate. Note that more details about the network implementation are provided in <xref ref-type="supplementary-material" rid="SM1">Appendix 9.2.2</xref>. From the figure, it is clear that <monospace>Loihi</monospace> takes much more time to set up the network than the <monospace>Brian</monospace>-based emulator. If no spiking information is read out from the network during simulation, the result is quite similar, as shown in <xref ref-type="fig" rid="F3">Figure 3B</xref>. <monospace>Brian2Loihi</monospace> reduces the initialization time drastically, in particular for larger networks. This boost in initialization time is highly valuable for parameter scans across many network configurations.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Comparing performance of the <monospace>Loihi</monospace> emulator with the <monospace>Loihi</monospace> chip. The initialization time <bold>(A,B)</bold> and execution time <bold>(C,D)</bold> for <monospace>Loihi</monospace>-based (blue) and emulator-based (orange) simulations were compared for different network sizes. In one case spikes were read out for all neurons and time steps <bold>(A,C)</bold> and in a second case no spikes were read out <bold>(B,D)</bold>. The <monospace>Brian</monospace>-based simulation using <monospace>Brian2Loihi</monospace> had faster initialization times across all network sizes, both with and without spike monitoring. For the execution time, the <monospace>Brian</monospace>-based simulation was only faster when a read out was defined; if no spikes were read out, <monospace>Loihi</monospace>-based execution was faster. Execution time both on the <monospace>Loihi</monospace> chip and using the emulator increases with network size. Points show the mean and shaded areas the standard deviation over 5 trials.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fninf-16-1015624-g0003.tif"/>
</fig>
<p>We were also interested in comparing the <italic>execution</italic> times of the simulation. <xref ref-type="fig" rid="F3">Figure 3C</xref> compares <monospace>Loihi</monospace>- and <monospace>Brian</monospace>-based execution times when all spikes are read out. Clearly, <monospace>Brian2Loihi</monospace> is much faster, and the difference grows with network size. However, if no read out is performed, <xref ref-type="fig" rid="F3">Figure 3D</xref> shows that <monospace>Loihi</monospace> executes the simulation faster across all network sizes. Therefore, <monospace>Brian2Loihi</monospace> is more efficient for prototyping networks, where one depends on analyzing comprehensive data about the network&#x00027;s behavior. For applications where a read out is not important or only a few spikes must be read out, execution on <monospace>Loihi</monospace> is faster.</p>
<p>This underlines the significance of the <monospace>Brian2Loihi</monospace> emulator for prototyping on the one hand and shows the potential of <monospace>Loihi</monospace> for large and long-running network simulations on the other. Note, however, that due to the longer initialization times on <monospace>Loihi</monospace>, its faster execution times likely pay off only if the network does not have to be initialized often, readout is minimal, and the simulation time is long. In many cases, choosing a <monospace>Brian</monospace>-based simulation for development and a <monospace>Loihi</monospace>-based simulation for production use could, in our view, be an efficient combination.</p>
</sec>
<sec>
<title>4.4. Applications</title>
<p>As a starting point for working with the emulator beyond the examples above, here we briefly describe two more complex applications implemented using the emulator. The code is openly available.</p>
<sec>
<title>4.4.1. Anisotropic network</title>
<p>In a recent study, we showed that a recurrently connected neural network with spatially inhomogeneous locally correlated connectivity (i.e., &#x0201C;the anisotropic network&#x0201D;; for the original model see Spreizer et al., <xref ref-type="bibr" rid="B32">2019</xref>) could be implemented on <monospace>Loihi</monospace> to generate noise-robust trajectories for robotic movements (Michaelis et al., <xref ref-type="bibr" rid="B18">2020</xref>). This biologically plausible network model can generate stable sequences of neural activity on the timescale of behavior, making it interesting for both neuroscience and neuromorphic applications. We implemented this network in the <monospace>Loihi</monospace> emulator and made it publicly available on GitHub.<xref ref-type="fn" rid="fn0009"><sup>9</sup></xref></p>
</sec>
<sec>
<title>4.4.2. SSSP</title>
<p>The goal of the Single Source Shortest Path (SSSP) problem is to find the shortest path from a start node to a target node in a given graph. Spiking neural networks can solve the problem through a wave front algorithm (Ponulak and Hopfield, <xref ref-type="bibr" rid="B24">2013</xref>). In this algorithm, a wave of spikes propagates through a network of neurons that represents the graph, and the algorithm stops when the target neuron spikes. To enable back tracing of the path, a local learning rule alters the weights accordingly during the wave propagation phase. An implementation using the <monospace>Loihi</monospace> emulator is available on GitHub.<xref ref-type="fn" rid="fn0010"><sup>10</sup></xref></p>
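<p>The wave front idea can be sketched in plain Python. In this event-driven sketch, each node is a neuron that spikes once, each edge cost acts as a spike propagation delay, and storing the predecessor of the first arriving spike plays the role of the weight change that enables back tracing on chip. All function and variable names are our own; the sketch is effectively Dijkstra&#x00027;s algorithm phrased in spiking terms, not the on-chip implementation.</p>

```python
import heapq

def sssp_wavefront(edges, source, target):
    """Wave front SSSP sketch on a graph given as (u, v, cost) triples
    with positive costs; returns the node list of the shortest path."""
    adj = {}
    for u, v, cost in edges:
        adj.setdefault(u, []).append((v, cost))
    # scheduled spikes: (arrival time step, node, predecessor)
    events = [(0, source, None)]
    pred = {}
    while events:
        t, node, p = heapq.heappop(events)
        if node in pred:
            continue        # each neuron spikes only once
        pred[node] = p      # the first arriving spike marks the predecessor
        if node == target:
            break           # the wave reached the target: stop
        for v, cost in adj.get(node, []):
            heapq.heappush(events, (t + cost, v, node))
    if target not in pred:
        return None         # target unreachable
    # back-trace the path along the stored predecessors
    path, n = [], target
    while n is not None:
        path.append(n)
        n = pred[n]
    return path[::-1]
```
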
<p>Furthermore, a new type of SSSP algorithm for neuromorphic hardware, the so-called add-and-minimize (AM) algorithm, was developed using the <monospace>Loihi</monospace> emulator (Michaelis, <xref ref-type="bibr" rid="B19">2022</xref>, <xref ref-type="supplementary-material" rid="SM1">Appendix 9.5</xref>). It is capable of solving the SSSP problem for larger graphs, especially when the edge costs have a higher resolution. The code is again publicly available.<xref ref-type="fn" rid="fn0011"><sup>11</sup></xref></p>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s5">
<title>5. Discussion</title>
<p>This study was motivated by two goals. First, we hope to simplify the transfer of models to <monospace>Loihi</monospace>; we therefore developed a <monospace>Loihi</monospace> emulator for <monospace>Brian</monospace>, featuring many functionalities of the <monospace>Loihi</monospace> chip. Second, in the process of developing the emulator, we aimed to provide a deeper understanding of the functionality of the neuromorphic research chip <monospace>Loihi</monospace> by analyzing its neuron and synapse model, as well as synaptic plasticity.</p>
<p>We hope that the analysis of <monospace>Loihi</monospace>&#x00027;s spiking units has provided some insight into how <monospace>Loihi</monospace> computes. By working out the numerical integration method, the numerical precision and associated rounding methods, and the update schedule, we were able to walk from the LIF neuron model down to the actual computations performed. For neurons and networks without plasticity we are able to emulate <monospace>Loihi</monospace> without error. Analyzing and implementing synaptic plasticity showed that, due to stochastic rounding, it is not possible to replicate the trial-by-trial behavior of learning exactly. However, on average the weight changes induced by a learning rule are preserved.</p>
<p>The main benefit of the <monospace>Brian2Loihi</monospace> emulator lies in lowering the hurdle for the experimenter. Especially in neuroscience, many scientists are accustomed to neuron simulators, and <monospace>Brian</monospace> in particular is widely used. The emulator makes a deep dive into new software frameworks and hardware systems unnecessary. It can be used for simple and fast prototyping, as it drastically improves the initialization time in all cases and the execution time whenever a read out is used. In addition, hardware-specific complications, like distributing neurons to cores, and constraints, like potential limits on the number of available neurons or synapses or on the speed and size of read-out, do not occur in the emulator. While these constraints will surely be relaxed with new generations of hardware and software in the upcoming years, they can already be ignored by using the emulator.</p>
<p>At this point it is important to note that not all <monospace>Loihi</monospace> features are included in the emulator yet. In particular, the homeostasis mechanism, rewards, and tags for the learning rule are not included. In <xref ref-type="supplementary-material" rid="SM1">Table A1</xref>, we provide a comparison of all functionalities of <monospace>Loihi</monospace> with those available in the current state of the emulator. Development of this emulator is an open-source project and we expect improvements and additions over time. Note that a follow-up project, called <monospace>Brian2Lava</monospace>, has already started.<xref ref-type="fn" rid="fn0012"><sup>12</sup></xref></p>
<p>An important vision for the future is to flexibly connect front-end development environments (e.g., <monospace>Brian</monospace>, NEST, Keras, TensorFlow) with various back-ends, such as neuromorphic platforms (e.g., <monospace>Loihi</monospace>, SpiNNaker, BrainScaleS, Dynap-SE) or emulators of these platforms. PyNN (Davison et al., <xref ref-type="bibr" rid="B7">2009</xref>) is one approach to unifying different front-ends and back-ends in a more general way. Nengo (Bekolay et al., <xref ref-type="bibr" rid="B1">2014</xref>), as another approach, does not support other simulators as front-ends, but allows several back-ends and focuses on higher-level applications (DeWolf et al., <xref ref-type="bibr" rid="B8">2020</xref>). NxTF (Rueckauer et al., <xref ref-type="bibr" rid="B27">2021</xref>) is an API and compiler aimed at simplifying the efficient deployment of deep convolutional spiking neural networks on <monospace>Loihi</monospace> using an interface derived from Keras. Ideally, one could continue to work in one&#x00027;s preferred front-end environment while a package maps the code to existing chips or to computer-based emulators of these chips. We expect an interface along these lines to play an important role in the future of neuromorphic computing and want to contribute to this development with our <monospace>Brian2Loihi</monospace> emulator.</p>
<p>At least for now, with an emulator at hand, it is easier to prototype network models and assess whether an implementation on <monospace>Loihi</monospace> is worth considering. When getting started with neuromorphic hardware, e.g., to scale up models or speed up simulations, researchers familiar with <monospace>Brian</monospace> can directly deploy models prepared with the emulator. We hope that this helps others find a smooth entry into the quickly emerging field of neuromorphic computing.</p>
</sec>
<sec sec-type="data-availability" id="s6">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/<xref ref-type="supplementary-material" rid="SM1">Supplementary material</xref>; further inquiries can be directed to the corresponding author/s.</p>
</sec>
<sec id="s7">
<title>Author contributions</title>
<p>CM, AL, and WO analyzed Loihi&#x00027;s neuron and synapse model, with a larger contribution from AL, and tested and refined the emulator implementation. CM programmed the emulator. WO performed the simulations, created the main figures, and edited and reviewed the manuscript. CM and AL created the supplementary figures and wrote the text. CT acquired funding and supervised the study. All authors reviewed the manuscript. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="funding-information" id="s8">
<title>Funding</title>
<p>This study received funding from Intel Corporation. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.</p>

</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ack><p>The work received funds from the Intel Corporation <italic>via</italic> a gift without restrictions. AL currently holds a Natural Sciences and Engineering Research Council of Canada PGSD-3 scholarship. We would like to thank Jonas Neuh&#x000F6;fer, Sebastian Schmitt, Andreas Wild, and Terrence C. Stewart for valuable discussions and input.</p>
</ack>
<sec sec-type="supplementary-material" id="s10">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fninf.2022.1015624/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fninf.2022.1015624/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bekolay</surname> <given-names>T.</given-names></name> <name><surname>Bergstra</surname> <given-names>J.</given-names></name> <name><surname>Hunsberger</surname> <given-names>E.</given-names></name> <name><surname>DeWolf</surname> <given-names>T.</given-names></name> <name><surname>Stewart</surname> <given-names>T.</given-names></name> <name><surname>Rasmussen</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Nengo: a Python tool for building large-scale functional brain models</article-title>. <source>Front. Neuroinformatics</source> <volume>7</volume>, <fpage>48</fpage>. <pub-id pub-id-type="doi">10.3389/fninf.2013.00048</pub-id><pub-id pub-id-type="pmid">24431999</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bouvier</surname> <given-names>M.</given-names></name> <name><surname>Valentian</surname> <given-names>A.</given-names></name> <name><surname>Mesquida</surname> <given-names>T.</given-names></name> <name><surname>Rummens</surname> <given-names>F.</given-names></name> <name><surname>Reyboz</surname> <given-names>M.</given-names></name> <name><surname>Vianello</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Spiking neural networks hardware implementations and challenges: a survey</article-title>. <source>ACM J. Emerg. Technol. Comput. Syst</source>. <volume>15</volume>, <fpage>1</fpage>&#x02013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.1145/3304103</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Petrovici</surname> <given-names>M. A.</given-names></name> <name><surname>Vogginger</surname> <given-names>B.</given-names></name> <name><surname>Ehrlich</surname> <given-names>M.</given-names></name> <name><surname>Pfeil</surname> <given-names>T.</given-names></name> <name><surname>Millner</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems</article-title>. <source>Biol. Cybernet</source>. <volume>104</volume>, <fpage>263</fpage>&#x02013;<lpage>296</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-011-0435-9</pub-id><pub-id pub-id-type="pmid">21618053</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brunel</surname> <given-names>N.</given-names></name></person-group> (<year>2000</year>). <article-title>Dynamics of networks of randomly connected excitatory and inhibitory spiking neurons</article-title>. <source>J. Physiol</source>. <volume>94</volume>, <fpage>445</fpage>&#x02013;<lpage>463</lpage>. <pub-id pub-id-type="doi">10.1016/S0928-4257(00)01084-6</pub-id><pub-id pub-id-type="pmid">11165912</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davies</surname> <given-names>M.</given-names></name> <name><surname>Wild</surname> <given-names>A.</given-names></name> <name><surname>Orchard</surname> <given-names>G.</given-names></name> <name><surname>Sandamirskaya</surname> <given-names>Y.</given-names></name> <name><surname>Guerra</surname> <given-names>G. A. F.</given-names></name> <name><surname>Joshi</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Advancing neuromorphic computing with Loihi: a survey of results and outlook</article-title>. <source>Proc. IEEE</source>. <volume>109</volume>, <fpage>911</fpage>&#x02013;<lpage>934</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2021.3067593</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davies</surname> <given-names>M.</given-names></name> <name><surname>Srinivasa</surname> <given-names>N.</given-names></name> <name><surname>Lin</surname> <given-names>T.</given-names></name> <name><surname>Chinya</surname> <given-names>G.</given-names></name> <name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Choday</surname> <given-names>S. H.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Loihi: a neuromorphic manycore processor with on-chip learning</article-title>. <source>IEEE Micro</source> <volume>38</volume>, <fpage>82</fpage>&#x02013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1109/MM.2018.112130359</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davison</surname> <given-names>A. P.</given-names></name> <name><surname>Br&#x000FC;derle</surname> <given-names>D.</given-names></name> <name><surname>Eppler</surname> <given-names>J. M.</given-names></name> <name><surname>Kremkow</surname> <given-names>J.</given-names></name> <name><surname>Muller</surname> <given-names>E.</given-names></name> <name><surname>Pecevski</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>PyNN: a common interface for neuronal network simulators</article-title>. <source>Front. Neuroinformatics</source> <volume>2</volume>, <fpage>11</fpage>. <pub-id pub-id-type="doi">10.3389/neuro.11.011.2008</pub-id><pub-id pub-id-type="pmid">19194529</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>DeWolf</surname> <given-names>T.</given-names></name> <name><surname>Jaworski</surname> <given-names>P.</given-names></name> <name><surname>Eliasmith</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Nengo and low-power AI hardware for robust, embedded neurorobotics</article-title>. <source>Front. Neurorobot</source>. <volume>14</volume>, <fpage>568359</fpage>. <pub-id pub-id-type="doi">10.3389/fnbot.2020.568359</pub-id><pub-id pub-id-type="pmid">33162886</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>DeWolf</surname> <given-names>T.</given-names></name> <name><surname>Stewart</surname> <given-names>T. C.</given-names></name> <name><surname>Slotine</surname> <given-names>J.-J.</given-names></name> <name><surname>Eliasmith</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>A spiking neural model of adaptive arm control</article-title>. <source>Proc. R. Soc. B Biol. Sci</source>. <volume>283</volume>, <fpage>20162134</fpage>. <pub-id pub-id-type="doi">10.1098/rspb.2016.2134</pub-id><pub-id pub-id-type="pmid">27903878</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Furber</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Large-scale neuromorphic computing systems</article-title>. <source>J. Neural Eng</source>. <volume>13</volume>, <fpage>051001</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/13/5/051001</pub-id><pub-id pub-id-type="pmid">27529195</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Furber</surname> <given-names>S. B.</given-names></name> <name><surname>Galluppi</surname> <given-names>F.</given-names></name> <name><surname>Temple</surname> <given-names>S.</given-names></name> <name><surname>Plana</surname> <given-names>L. A.</given-names></name></person-group> (<year>2014</year>). <article-title>The spinnaker project</article-title>. <source>Proc. IEEE</source> <volume>102</volume>, <fpage>652</fpage>&#x02013;<lpage>665</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2014.2304638</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gerstner</surname> <given-names>W.</given-names></name> <name><surname>Kistler</surname> <given-names>W. M.</given-names></name> <name><surname>Naud</surname> <given-names>R.</given-names></name> <name><surname>Paninski</surname> <given-names>L.</given-names></name></person-group> (<year>2014</year>). <source>Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9781107447615</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gr&#x000FC;ning</surname> <given-names>A.</given-names></name> <name><surname>Bohte</surname> <given-names>S. M.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;Spiking neural networks: principles and challenges,&#x0201D;</article-title> in <source>ESANN</source> (<publisher-loc>Bruges</publisher-loc>).</citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>C.-K.</given-names></name> <name><surname>Wild</surname> <given-names>A.</given-names></name> <name><surname>Chinya</surname> <given-names>G. N.</given-names></name> <name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Davies</surname> <given-names>M.</given-names></name> <name><surname>Lavery</surname> <given-names>D. M.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Programming spiking neural networks on Intel&#x00027;s Loihi</article-title>. <source>Computer</source> <volume>51</volume>, <fpage>52</fpage>&#x02013;<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1109/MC.2018.157113521</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>London</surname> <given-names>M.</given-names></name> <name><surname>Roth</surname> <given-names>A.</given-names></name> <name><surname>Beeren</surname> <given-names>L.</given-names></name> <name><surname>H&#x000E4;usser</surname> <given-names>M.</given-names></name> <name><surname>Latham</surname> <given-names>P. E.</given-names></name></person-group> (<year>2010</year>). <article-title>Sensitivity to perturbations <italic>in vivo</italic> implies high noise and suggests rate coding in cortex</article-title>. <source>Nature</source> <volume>466</volume>, <fpage>123</fpage>&#x02013;<lpage>127</lpage>. <pub-id pub-id-type="doi">10.1038/nature09086</pub-id><pub-id pub-id-type="pmid">20596024</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>T.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Qu</surname> <given-names>C.</given-names></name> <name><surname>Lee</surname> <given-names>M. K. F.</given-names></name> <name><surname>Tang</surname> <given-names>W. T.</given-names></name> <name><surname>Wong</surname> <given-names>W.-F.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>An FPGA-based hardware emulator for neuromorphic chip with RRAM</article-title>. <source>IEEE Trans. Comput. Aided Design Integr. Circ. Syst</source>. <volume>39</volume>, <fpage>438</fpage>&#x02013;<lpage>450</lpage>. <pub-id pub-id-type="doi">10.1109/TCAD.2018.2889670</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Michaelis</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>PeleNet: a reservoir computing framework for Loihi</article-title>. <source>arXiv preprint arXiv:2011.12338</source>. <pub-id pub-id-type="doi">10.48550/ARXIV.2011.12338</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Michaelis</surname> <given-names>C.</given-names></name> <name><surname>Lehr</surname> <given-names>A. B.</given-names></name> <name><surname>Tetzlaff</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Robust trajectory generation for robotic control on the neuromorphic research chip Loihi</article-title>. <source>Front. Neurorobot</source>. <volume>14</volume>, <fpage>589532</fpage>. <pub-id pub-id-type="doi">10.3389/fnbot.2020.589532</pub-id><pub-id pub-id-type="pmid">33324191</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="thesis"><person-group person-group-type="author"><name><surname>Michaelis</surname> <given-names>C.</given-names></name></person-group> (<year>2022</year>). <source>Think local, act global: robust and real-time movement encoding in spiking neural networks using neuromorphic hardware</source> (<publisher-loc>Ph.D. thesis</publisher-loc>). G&#x000F6;ttingen: University Goettingen Repository.</citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>M&#x000FC;ller</surname> <given-names>E.</given-names></name> <name><surname>Schmitt</surname> <given-names>S.</given-names></name> <name><surname>Mauch</surname> <given-names>C.</given-names></name> <name><surname>Billaudelle</surname> <given-names>S.</given-names></name> <name><surname>Gr&#x000FC;bl</surname> <given-names>A.</given-names></name> <name><surname>G&#x000FC;ttler</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2020b</year>). <article-title>The operating system of the neuromorphic BrainScaleS-1 system</article-title>. <source>arXiv preprint arXiv:2003.13749</source>. <pub-id pub-id-type="doi">10.48550/ARXIV.2003.13749</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>M&#x000FC;ller</surname> <given-names>E.</given-names></name> <name><surname>Mauch</surname> <given-names>C.</given-names></name> <name><surname>Spilger</surname> <given-names>P.</given-names></name> <name><surname>Breitwieser</surname> <given-names>O. J.</given-names></name> <name><surname>Kl&#x000E4;hn</surname> <given-names>J.</given-names></name> <name><surname>St&#x000F6;ckel</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2020a</year>). <article-title>Extending BrainScaleS OS for BrainScaleS-2</article-title>. <source>arXiv preprint arXiv:2003.13750</source>. <pub-id pub-id-type="doi">10.48550/ARXIV.2003.13750</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petrovici</surname> <given-names>M. A.</given-names></name> <name><surname>Vogginger</surname> <given-names>B.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>P.</given-names></name> <name><surname>Breitwieser</surname> <given-names>O.</given-names></name> <name><surname>Lundqvist</surname> <given-names>M.</given-names></name> <name><surname>Muller</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms</article-title>. <source>PLoS ONE</source> <volume>9</volume>, <fpage>e108590</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0108590</pub-id><pub-id pub-id-type="pmid">25303102</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfeiffer</surname> <given-names>M.</given-names></name> <name><surname>Pfeil</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Deep learning with spiking neurons: opportunities and challenges</article-title>. <source>Front. Neurosci</source>. <volume>12</volume>, <fpage>774</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00774</pub-id><pub-id pub-id-type="pmid">30410432</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ponulak</surname> <given-names>F.</given-names></name> <name><surname>Hopfield</surname> <given-names>J.</given-names></name></person-group> (<year>2013</year>). <article-title>Rapid, parallel path planning by propagating wavefronts of spiking neural activity</article-title>. <source>Front. Comput. Neurosci</source>. <volume>7</volume>, <fpage>98</fpage>. <pub-id pub-id-type="doi">10.3389/fncom.2013.00098</pub-id><pub-id pub-id-type="pmid">23882213</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rajendran</surname> <given-names>B.</given-names></name> <name><surname>Sebastian</surname> <given-names>A.</given-names></name> <name><surname>Schmuker</surname> <given-names>M.</given-names></name> <name><surname>Srinivasa</surname> <given-names>N.</given-names></name> <name><surname>Eleftheriou</surname> <given-names>E.</given-names></name></person-group> (<year>2019</year>). <article-title>Low-power neuromorphic hardware for signal processing applications: a review of architectural and system-level design approaches</article-title>. <source>IEEE Signal Process. Mag</source>. <volume>36</volume>, <fpage>97</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2019.2933719</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rhodes</surname> <given-names>O.</given-names></name> <name><surname>Bogdan</surname> <given-names>P. A.</given-names></name> <name><surname>Brenninkmeijer</surname> <given-names>C.</given-names></name> <name><surname>Davidson</surname> <given-names>S.</given-names></name> <name><surname>Fellows</surname> <given-names>D.</given-names></name> <name><surname>Gait</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>sPyNNaker: a software package for running PyNN simulations on SpiNNaker</article-title>. <source>Front. Neurosci</source>. <volume>12</volume>, <fpage>816</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00816</pub-id><pub-id pub-id-type="pmid">30524220</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rueckauer</surname> <given-names>B.</given-names></name> <name><surname>Bybee</surname> <given-names>C.</given-names></name> <name><surname>Goettsche</surname> <given-names>R.</given-names></name> <name><surname>Singh</surname> <given-names>Y.</given-names></name> <name><surname>Mishra</surname> <given-names>J.</given-names></name> <name><surname>Wild</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>NxTF: an API and compiler for deep spiking neural networks on Intel Loihi</article-title>. <source>arXiv preprint arXiv:2101.04261</source>. <pub-id pub-id-type="doi">10.1145/3501770</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sawada</surname> <given-names>J.</given-names></name> <name><surname>Akopyan</surname> <given-names>F.</given-names></name> <name><surname>Cassidy</surname> <given-names>A. S.</given-names></name> <name><surname>Taba</surname> <given-names>B.</given-names></name> <name><surname>Debole</surname> <given-names>M. V.</given-names></name> <name><surname>Datta</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>&#x0201C;TrueNorth ecosystem for brain-inspired computing: scalable systems, software, and applications,&#x0201D;</article-title> in <source>SC&#x00027;16: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis</source> (<publisher-loc>Salt Lake City, UT</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>130</fpage>&#x02013;<lpage>141</lpage>. <pub-id pub-id-type="doi">10.1109/SC.2016.11</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schuman</surname> <given-names>C. D.</given-names></name> <name><surname>Potok</surname> <given-names>T. E.</given-names></name> <name><surname>Patton</surname> <given-names>R. M.</given-names></name> <name><surname>Birdwell</surname> <given-names>J. D.</given-names></name> <name><surname>Dean</surname> <given-names>M. E.</given-names></name> <name><surname>Rose</surname> <given-names>G. S.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>A survey of neuromorphic computing and neural networks in hardware</article-title>. <source>arXiv preprint arXiv:1705.06963</source>. <pub-id pub-id-type="doi">10.48550/ARXIV.1705.06963</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sompolinsky</surname> <given-names>H.</given-names></name> <name><surname>Crisanti</surname> <given-names>A.</given-names></name> <name><surname>Sommers</surname> <given-names>H.-J.</given-names></name></person-group> (<year>1988</year>). <article-title>Chaos in random neural networks</article-title>. <source>Phys. Rev. Lett</source>. <volume>61</volume>, <fpage>259</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.61.259</pub-id><pub-id pub-id-type="pmid">10039285</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Spilger</surname> <given-names>P.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>E.</given-names></name> <name><surname>Emmel</surname> <given-names>A.</given-names></name> <name><surname>Leibfried</surname> <given-names>A.</given-names></name> <name><surname>Mauch</surname> <given-names>C.</given-names></name> <name><surname>Pehle</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>&#x0201C;hxtorch: PyTorch for BrainScaleS-2,&#x0201D;</article-title> in <source>IoT Streams for Data-Driven Predictive Maintenance and IoT, Edge, and Mobile for Embedded Machine Learning</source> (<publisher-loc>Ghent</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>189</fpage>&#x02013;<lpage>200</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-66770-2_14</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spreizer</surname> <given-names>S.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Kumar</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>From space to time: spatial inhomogeneities lead to the emergence of spatiotemporal sequences in spiking neuronal networks</article-title>. <source>PLoS Comput. Biol</source>. <volume>15</volume>, <fpage>e1007432</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1007432</pub-id><pub-id pub-id-type="pmid">31652259</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stagsted</surname> <given-names>R.</given-names></name> <name><surname>Vitale</surname> <given-names>A.</given-names></name> <name><surname>Binz</surname> <given-names>J.</given-names></name> <name><surname>Bonde Larsen</surname> <given-names>L.</given-names></name> <name><surname>Sandamirskaya</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>&#x0201C;Towards neuromorphic control: a spiking neural network based pid controller for UAV,&#x0201D;</article-title> in <source>Proceedings of Robotics: Science and Systems</source> (<publisher-loc>Corvallis, OR</publisher-loc>). <pub-id pub-id-type="doi">10.15607/RSS.2020.XVI.074</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stimberg</surname> <given-names>M.</given-names></name> <name><surname>Brette</surname> <given-names>R.</given-names></name> <name><surname>Goodman</surname> <given-names>D. F.</given-names></name></person-group> (<year>2019</year>). <article-title>Brian 2, an intuitive and efficient neural simulator</article-title>. <source>eLife</source> <volume>8</volume>:<fpage>e47314</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.47314</pub-id><pub-id pub-id-type="pmid">31429824</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thakur</surname> <given-names>C. S.</given-names></name> <name><surname>Molin</surname> <given-names>J. L.</given-names></name> <name><surname>Cauwenberghs</surname> <given-names>G.</given-names></name> <name><surname>Indiveri</surname> <given-names>G.</given-names></name> <name><surname>Kumar</surname> <given-names>K.</given-names></name> <name><surname>Qiao</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Large-scale neuromorphic spiking array processors: a quest to mimic the brain</article-title>. <source>Front. Neurosci</source>. <volume>12</volume>, <fpage>891</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00891</pub-id><pub-id pub-id-type="pmid">30666180</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Valancius</surname> <given-names>S.</given-names></name> <name><surname>Richter</surname> <given-names>E.</given-names></name> <name><surname>Purdy</surname> <given-names>R.</given-names></name> <name><surname>Rockowitz</surname> <given-names>K.</given-names></name> <name><surname>Inouye</surname> <given-names>M.</given-names></name> <name><surname>Mack</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>&#x0201C;FPGA based emulation environment for neuromorphic architectures,&#x0201D;</article-title> in <source>2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)</source> (<publisher-loc>New Orleans, LA</publisher-loc>). <pub-id pub-id-type="doi">10.1109/IPDPSW50202.2020.00022</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Vreeswijk</surname> <given-names>C.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>1996</year>). <article-title>Chaos in neuronal networks with balanced excitatory and inhibitory activity</article-title>. <source>Science</source> <volume>274</volume>, <fpage>1724</fpage>&#x02013;<lpage>1726</lpage>. <pub-id pub-id-type="doi">10.1126/science.274.5293.1724</pub-id><pub-id pub-id-type="pmid">8939866</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Young</surname> <given-names>A. R.</given-names></name> <name><surname>Dean</surname> <given-names>M. E.</given-names></name> <name><surname>Plank</surname> <given-names>J. S.</given-names></name> <name><surname>Rose</surname> <given-names>G. S.</given-names></name></person-group> (<year>2019</year>). <article-title>A review of spiking neuromorphic hardware communication systems</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>135606</fpage>&#x02013;<lpage>135620</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2941772</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup><ext-link ext-link-type="uri" xlink:href="https://www.nengo.ai/nengo-loihi/">https://www.nengo.ai/nengo-loihi/</ext-link></p></fn>
<fn id="fn0002"><p><sup>2</sup><ext-link ext-link-type="uri" xlink:href="https://code.ini.uzh.ch/yigit/NICE-workshop-2021">https://code.ini.uzh.ch/yigit/NICE-workshop-2021</ext-link></p></fn>
<fn id="fn0003"><p><sup>3</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/andrewlehr/loihi_parameter_tuning_dashboard">https://github.com/andrewlehr/loihi_parameter_tuning_dashboard</ext-link></p></fn>
<fn id="fn0004"><p><sup>4</sup>The documentation for the <monospace>NxSDK</monospace> is available from Intel on request.</p></fn>
<fn id="fn0005"><p><sup>5</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/sagacitysite/brian2_loihi_utils/blob/main/algorithm/02_weight-calculation.ipynb">https://github.com/sagacitysite/brian2_loihi_utils/blob/main/algorithm/02_weight-calculation.ipynb</ext-link></p></fn>
<fn id="fn0006"><p><sup>6</sup><ext-link ext-link-type="uri" xlink:href="https://pypi.org/project/brian2-loihi/">https://pypi.org/project/brian2-loihi/</ext-link></p></fn>
<fn id="fn0007"><p><sup>7</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/sagacitysite/brian2_loihi/">https://github.com/sagacitysite/brian2_loihi/</ext-link></p></fn>
<fn id="fn0008"><p><sup>8</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/sagacitysite/brian2_loihi_utils/tree/main/examples">https://github.com/sagacitysite/brian2_loihi_utils/tree/main/examples</ext-link></p></fn>
<fn id="fn0009"><p><sup>9</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/andrewlehr/Brian2Loihi_SpreizerNet">https://github.com/andrewlehr/Brian2Loihi_SpreizerNet</ext-link></p></fn>
<fn id="fn0010"><p><sup>10</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/Winnus/Brian2Loihi_SSSP">https://github.com/Winnus/Brian2Loihi_SSSP</ext-link></p></fn>
<fn id="fn0011"><p><sup>11</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/elena-off/sssp-loihiemulator">https://github.com/elena-off/sssp-loihiemulator</ext-link></p></fn>
<fn id="fn0012"><p><sup>12</sup><ext-link ext-link-type="uri" xlink:href="https://gitlab.com/tetzlab/brian2lava">https://gitlab.com/tetzlab/brian2lava</ext-link></p></fn>
</fn-group>

</back>
</article> 