<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Nanotechnol.</journal-id>
<journal-title>Frontiers in Nanotechnology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Nanotechnol.</abbrev-journal-title>
<issn pub-type="epub">2673-3013</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">801999</article-id>
<article-id pub-id-type="doi">10.3389/fnano.2021.801999</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Nanotechnology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Fault Injection Attacks in Spiking Neural Networks and Countermeasures</article-title>
<alt-title alt-title-type="left-running-head">Nagarajan et&#x20;al.</alt-title>
<alt-title alt-title-type="right-running-head">Fault Injection in SNN</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Nagarajan</surname>
<given-names>Karthikeyan</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1366419/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Li</surname>
<given-names>Junde</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1609877/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ensan</surname>
<given-names>Sina Sayyah</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1570515/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kannan</surname>
<given-names>Sachhidh</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ghosh</surname>
<given-names>Swaroop</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1325961/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>School of Electrical Engineering and Computer Science</institution>, <institution>Penn State University</institution>, <addr-line>University Park</addr-line>, <addr-line>PA</addr-line>, <country>United&#x20;States</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Ampere Computing</institution>, <addr-line>Portland</addr-line>, <addr-line>OR</addr-line>, <country>United&#x20;States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1133172/overview">Ying-Chen (Daphne) Chen</ext-link>, Northern Arizona University, United&#x20;States</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1147193/overview">Jayasimha Atulasimha</ext-link>, Virginia Commonwealth University, United&#x20;States</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/312402/overview">Yao-Feng Chang</ext-link>, Intel, United&#x20;States</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Karthikeyan Nagarajan, <email>kxn287@psu.edu</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Nanomaterials, a section of the journal Frontiers in Nanotechnology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>11</day>
<month>01</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>3</volume>
<elocation-id>801999</elocation-id>
<history>
<date date-type="received">
<day>26</day>
<month>10</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>30</day>
<month>11</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2022 Nagarajan, Li, Ensan, Kannan and Ghosh.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Nagarajan, Li, Ensan, Kannan and Ghosh</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>Spiking Neural Networks (SNNs) are fast emerging as an alternative to Deep Neural Networks (DNNs). They are computationally more powerful and more energy-efficient than DNNs. While exciting at first glance, SNNs contain security-sensitive assets (e.g., the neuron threshold voltage) and vulnerabilities (e.g., the sensitivity of classification accuracy to changes in the neuron threshold voltage) that can be exploited by adversaries. We explore global fault injection attacks using the external power supply, as well as laser-induced local power glitches, on SNNs designed using common analog neurons to corrupt critical training parameters such as the spike amplitude and the neuron&#x2019;s membrane threshold potential. We also analyze the impact of power-based attacks on an SNN for a digit classification task and observe a worst-case classification accuracy degradation of 85.65%. We explore the impact of various SNN design parameters (e.g., learning rate, spike trace decay constant, and number of neurons) and identify design choices for robust implementations of SNNs. We recover the classification accuracy degradation by 30&#x2013;47% for a subset of power-based attacks by modifying SNN training parameters such as the learning rate, the trace decay constant, and the number of neurons per layer. We also propose hardware-level defenses, e.g., a robust current driver design that is immune to power-oriented attacks and improved circuit sizing of neuron components to reduce/recover the adversarial accuracy degradation at the cost of negligible area and a 25% power overhead. We also propose a dummy neuron-based detection of voltage fault injection at &#x223c;1% power and area overhead&#x20;each.</p>
</abstract>
<kwd-group>
<kwd>spiking neural network</kwd>
<kwd>security</kwd>
<kwd>fault injection</kwd>
<kwd>STDP</kwd>
<kwd>side channel attack</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Artificial Neural Networks (ANNs or NNs), inspired by the functionality of the human brain, consist of layers of neurons that are interconnected through synapses and can be used to approximate any computable function. The advent of neural networks in safety-critical domains such as autonomous driving (<xref ref-type="bibr" rid="B18">Kaiser et&#x20;al., 2016</xref>), healthcare (<xref ref-type="bibr" rid="B1">Azghadi et&#x20;al., 2020</xref>), Internet of Things (<xref ref-type="bibr" rid="B35">Whatmough et&#x20;al., 2018</xref>), and security (<xref ref-type="bibr" rid="B7">Cao et&#x20;al., 2015</xref>) warrants an investigation of their security vulnerabilities and threats. An attack on a neural network can lead to undesirable or unsafe decisions in real-world applications (e.g., reduced accuracy or confidence in road sign identification during autonomous driving). These attacks can be initiated during the production, training, or final application phase.</p>
<p>Spiking Neural Networks (SNNs) (<xref ref-type="bibr" rid="B21">Maass, 1997</xref>) are the third generation of neural networks. SNNs are emerging as an alternative to Deep Neural Networks (DNNs) since they are biologically plausible, computationally powerful (<xref ref-type="bibr" rid="B15">Heiberg et&#x20;al., 2018</xref>), and energy-efficient (<xref ref-type="bibr" rid="B26">Merolla et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B9">Davies et&#x20;al., 2018</xref>; <xref ref-type="bibr" rid="B31">Tavanaei et&#x20;al., 2019</xref>). However, very limited research exists on the security of SNNs against adversarial attacks. Broadly, the attacks could be classified as (1) white box attacks where an attacker has complete knowledge of the SNN architecture, and (2) black box attacks where the attacker does not know the SNN architecture, network parameters, or training&#x20;data.</p>
<p>Multiple prior works (<xref ref-type="bibr" rid="B12">Goodfellow et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B19">Kurakin et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B22">Madry et&#x20;al., 2017</xref>) investigate adversarial attacks on DNNs, e.g., imperceptible changes to input data that cause a classifier to mispredict with higher probability, and suggest countermeasures. The vulnerabilities of SNNs under a white box scenario, e.g., sensitivity to adversarial examples, are studied in (<xref ref-type="bibr" rid="B2">Bagheri et&#x20;al., 2018</xref>), which also proposes a robust training mechanism as a defense. A white box fault injection attack on SNNs through adversarial input noise is proposed in (<xref ref-type="bibr" rid="B34">Venceslai et&#x20;al., 2020</xref>). A black box attack that generates adversarial input examples to cause SNN misprediction is proposed in (<xref ref-type="bibr" rid="B23">Marchisio et&#x20;al., 2020</xref>). However, these white and black box attacks on SNNs do not study the effects of voltage/power-based fault injection attacks.</p>
<p>Prior works have shown that voltage fault injection (VFI) techniques can be used as an effective side channel attack to disrupt the execution flow of a system. In (<xref ref-type="bibr" rid="B3">Barenghi et&#x20;al., 2012</xref>), a fault injection technique is proposed for cryptographic devices that underpowers the device to introduce bit errors. In (<xref ref-type="bibr" rid="B5">Bozzato et&#x20;al., 2019</xref>), novel VFI techniques are proposed to inject glitches into popular microcontrollers from manufacturers such as STMicroelectronics and Texas Instruments. In (<xref ref-type="bibr" rid="B36">Zussa et&#x20;al., 2013</xref>), a negative power supply glitch attack is introduced on an FPGA to cause timing constraint violations. Local voltage and clock glitching attacks have also been proposed using laser exposure. However, such studies have not been performed for&#x20;SNNs.</p>
<sec id="s1-1">
<title>Proposed Threat Model</title>
<p>Research on SNN attacks is limited (apart from adversarial input-based attacks). As in classical systems, the adversary can manipulate the supply voltage or inject voltage glitches into the SNN system. This is feasible for (1) an external adversary who has physical possession of the device or its power port, or (2) an insider adversary with access to a power port or a laser gun to inject the fault. In this paper, we study seven attacks under black box and white box scenarios.</p>
<sec id="s1-1-1">
<title>Black Box Attack</title>
<p>In this scenario (Attack 7 in <xref ref-type="sec" rid="s4-4">Section 4.4</xref>), the adversary affects the power supply of the entire system to (1) corrupt the spiking amplitude of the SNN neuron inputs and (2) disrupt the SNN neuron&#x2019;s membrane functionality. The adversary does not need to know the SNN architecture but needs to control the external power supply (<italic>V</italic>
<sub>
<italic>DD</italic>
</sub>) to launch this attack. <xref ref-type="fig" rid="F1">Figure&#x20;1</xref> shows a high-level schematic of the proposed threat model against an SNN, where an input image to be classified is converted to spike trains and fed to the neuron layers. The objective is to degrade the accuracy of digit classification. Note that the neuron layers, neurons, and their interconnections shown in <xref ref-type="fig" rid="F1">Figure&#x20;1</xref> merely illustrate the proposed threat model. The SNN architecture actually implemented in this paper is explained in <xref ref-type="sec" rid="s4-1">Section&#x20;4.1</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Threat model for power-based attacks on SNN.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g001.tif"/>
</fig>
</sec>
<sec id="s1-1-2">
<title>White Box Attacks</title>
<p>In this scenario, we consider the following cases (details in <xref ref-type="sec" rid="s4-2">Section 4.2</xref>), where the adversary is able to individually attack SNN layers and peripherals through localized laser-based power fault injection: 1) Attack 1, where only the peripherals, e.g., the input current drivers, are attacked; 2) Attacks 2, 3, and 4, where individual SNN layers are attacked partially to fully, i.e.,&#x20;0&#x2013;100%; 3) Attack 5, where all SNN layers are affected (but no peripherals); and 4) Attack 6, where the timing of the attacks on individual layers varies from 0 to 100% of the training/inference&#x20;phase.</p>
<p>Contributions: In summary, we:<list list-type="simple">
<list-item>
<p>&#x2022;Present a detailed analysis of two neuron models, namely, the Axon Hillock neuron and the voltage amplifier I&#x26;F neuron, under global, local, and fine-grain supply voltage variation</p>
</list-item>
<list-item>
<p>&#x2022;Propose seven power-based attack models against SNN designs under black box and white box settings</p>
</list-item>
<list-item>
<p>&#x2022;Analyze the impact of proposed attacks for digit classification&#x20;tasks</p>
</list-item>
<list-item>
<p>&#x2022;Analyze the sensitivity of various design parameters of the SNN learning algorithm (STDP) to fault injection attacks</p>
</list-item>
<list-item>
<p>&#x2022;Propose hardware defenses and a novel detection technique</p>
</list-item>
</list>
</p>
<p>In the remainder of the paper, <xref ref-type="sec" rid="s2">Section 2</xref> presents background on SNNs and neuron design, <xref ref-type="sec" rid="s3">Section 3</xref> proposes the attack models, Sections 4 and 5 present the analysis of the attacks and the countermeasures, respectively, <xref ref-type="sec" rid="s6">Section 6</xref> presents a discussion, and, finally, <xref ref-type="sec" rid="s7">Section 7</xref> draws conclusions.</p>
</sec>
</sec>
</sec>
<sec id="s2">
<title>2 Background</title>
<p>In this section, we present an overview of SNNs and the neuron designs (<xref ref-type="bibr" rid="B17">Indiveri et&#x20;al., 2011</xref>) used in this&#x20;paper.</p>
<sec id="s2-1">
<title>2.1 Overview of Spiking Neural Network</title>
<p>SNNs consist of layers of spiking neurons that are connected to adjacent neurons using synaptic weights (<xref ref-type="fig" rid="F1">Figure&#x20;1</xref>). Neurons in adjacent layers exchange information in the form of spike trains. The critical parameters for SNN operation include the timing of the spikes and the strengths of the synaptic weights between neurons. Each neuron includes a membrane, whose potential increases when the neuron receives an input spike. When this membrane potential reaches a pre-designed threshold, the neuron <italic>fires</italic> an output spike. Various neuron models, such as the I&#x26;F, Hodgkin-Huxley, and spike response models, exist with different membrane and spike-generation operations. In this work, we have implemented two flavors of the I&#x26;F neuron to showcase the power-based attacks.</p>
</sec>
<sec id="s2-2">
<title>2.2 Neuron Model</title>
<p>In this work, we have used Leaky Integrate and Fire (LIF) neuron models where the temporal dynamics are represented by:<disp-formula id="e1">
<mml:math id="m1">
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">mem</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>v</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">rest</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>I</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(1)</label>
</disp-formula>
</p>
<p>Here, <italic>v</italic>(<italic>t</italic>) is the membrane potential, <italic>&#x3c4;</italic>
<sub>
<italic>mem</italic>
</sub> is the membrane time constant, <italic>v</italic>
<sub>
<italic>rest</italic>
</sub> is the resting potential, and <italic>I</italic>(<italic>t</italic>) represents the sum of inputs from all synapses to the neuron. When the membrane voltage reaches a pre-designed threshold <italic>v</italic><sub><italic>th</italic></sub>(<italic>t</italic>), the neuron fires an output spike and its membrane potential resets to <italic>v</italic>
<sub>
<italic>reset</italic>
</sub>. The neuron&#x2019;s membrane potential is fixed for a refractory period of <italic>&#x3b4;</italic>
<sub>
<italic>ref</italic>
</sub>. For the neural network implementation, we have used the adaptive thresholding scheme of the (<xref ref-type="bibr" rid="B10">Diehl et&#x20;al., 2015</xref>) LIF neuron, where each neuron follows these temporal dynamics:<disp-formula id="e2">
<mml:math id="m2">
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>v</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">theta</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>&#x3b8;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>&#x2202;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(2)</label>
</disp-formula>Here, the constant <italic>&#x3b8;</italic><sub>0</sub> is greater than both <italic>v</italic><sub><italic>rest</italic></sub> and <italic>v</italic><sub><italic>reset</italic></sub>, and <italic>&#x3c4;</italic><sub><italic>theta</italic></sub> is the adaptive threshold time constant. When a neuron receives a spike, <italic>&#x3b8;</italic>(<italic>t</italic>) is increased by a constant value of <italic>&#x3b8;</italic>
<sub>&#x2b;</sub> and then decays exponentially as shown in <xref ref-type="disp-formula" rid="e2">Eq.&#x20;2</xref>.</p>
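<p>The dynamics of Eqs. 1, 2 can be sketched in a few lines of Python using forward-Euler integration. This is an illustration only, not the analog implementation studied in this paper, and all parameter values below are assumptions chosen for readability.</p>

```python
# Forward-Euler sketch of the LIF neuron with adaptive threshold (Eqs. 1, 2).
# Illustrative only: the neurons in this paper are analog circuits, and all
# parameter values below are assumed, not taken from the paper.
def simulate_lif(inputs, dt=1.0, tau_mem=100.0, tau_theta=1e4,
                 v_rest=-65.0, v_reset=-60.0, theta0=-52.0, theta_plus=0.05):
    v, theta = v_rest, 0.0
    spike_times = []
    for t, i_syn in enumerate(inputs):
        # Eq. 1: tau_mem * dv/dt = -(v(t) - v_rest) + I(t)
        v += (dt / tau_mem) * (-(v - v_rest) + i_syn)
        # Eq. 2: tau_theta * dtheta/dt = -theta(t)
        theta -= (dt / tau_theta) * theta
        if v >= theta0 + theta:        # v_th(t) = theta0 + theta(t)
            spike_times.append(t)
            v = v_reset                # reset the membrane potential
            theta += theta_plus        # raise the threshold on every spike
    return spike_times
```

<p>With a constant suprathreshold input, each output spike raises the threshold slightly, so firing slows until the decay of <italic>&#x3b8;</italic>(<italic>t</italic>) balances the increments.</p>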
</sec>
<sec id="s2-3">
<title>2.3 Synaptic Learning Model</title>
<p>Hebbian Learning: In Hebbian learning (<xref ref-type="bibr" rid="B14">Hebb, 2005</xref>), correlated activation of pre- and post-synaptic neurons leads to the strengthening of the synaptic weight between the two neurons. The basic Hebbian learning rule is expressed as:<disp-formula id="e3">
<mml:math id="m3">
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>w</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>w</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>x</mml:mi>
</mml:math>
<label>(3)</label>
</disp-formula>Here, &#x394;<italic>w</italic> denotes the change in synaptic weight, <italic>x</italic> refers to the array of input spikes on the neuron&#x2019;s synapses, <italic>&#x3b7;</italic> is the learning rate, and <italic>w</italic> is the synaptic weight associated with the neuron. The term <italic>y</italic>(<italic>x</italic>, <italic>w</italic>) denotes the post-synaptic activation of the neuron, which is a function of the inputs and the weights.</p>
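<p>For concreteness, Eq. 3 can be written in code with an assumed linear post-synaptic activation <italic>y</italic>(<italic>x</italic>, <italic>w</italic>) = <italic>w</italic> &#xb7; <italic>x</italic>; Eq. 3 itself leaves <italic>y</italic> generic.</p>

```python
# One Hebbian update (Eq. 3) for a single neuron. The linear activation
# y(x, w) = sum(w * x) is an assumption for illustration only.
def hebbian_update(w, x, eta=0.01):
    y = sum(wi * xi for wi, xi in zip(w, x))            # post-synaptic activation y(x, w)
    return [wi + eta * y * xi for wi, xi in zip(w, x)]  # delta_w = eta * y * x
```

<p>Only synapses with non-zero pre-synaptic activity are strengthened, reflecting the correlated-activation principle stated above.</p>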
<sec id="s2-3-1">
<title>Spike Time Dependent Plasticity</title>
<p>Hebbian learning is often implemented as STDP, a more quantitative form in which the synaptic strength between two neurons is determined by the relative timing of their spikes. The change in synaptic weight (&#x394;<italic>w</italic>) is represented by:<disp-formula id="e4">
<mml:math id="m4">
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>w</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="cases">
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b7;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">post</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mi>x</mml:mi>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi mathvariant="normal">i</mml:mi>
<mml:mi mathvariant="normal">f</mml:mi>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3b7;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="italic">pre</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mi>x</mml:mi>
<mml:mi>p</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="1em"/>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi mathvariant="normal">i</mml:mi>
<mml:mi mathvariant="normal">f</mml:mi>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="normal">&#x394;</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(4)</label>
</disp-formula>Here, <italic>&#x3b7;</italic>
<sub>
<italic>pre</italic>
</sub> and <italic>&#x3b7;</italic>
<sub>
<italic>post</italic>
</sub> represent the pre- and post-synaptic learning rates, <italic>&#x3c4;</italic>
<sub>
<italic>t</italic>
</sub> denotes the spike trace decay time constant, and &#x394;<italic>t</italic> represents the relative spike timing difference between connected neurons. When &#x394;<italic>t</italic> is close to 0, the exponential factor is very close to 1, and it decays exponentially to 0 with the spike trace decay time constant (<italic>&#x3c4;</italic>
<sub>
<italic>t</italic>
</sub>).</p>
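<p>A scalar sketch of the STDP window in Eq. 4, assuming the usual convention &#x394;<italic>t</italic> = <italic>t</italic><sub><italic>post</italic></sub> &#x2212; <italic>t</italic><sub><italic>pre</italic></sub> and writing both branches so that, as stated above, the exponential factor starts near 1 at &#x394;<italic>t</italic> &#x2248; 0 and decays to 0 with <italic>&#x3c4;</italic><sub><italic>t</italic></sub>:</p>

```python
import math

# Pair-based STDP window of Eq. 4. Assumptions: dt = t_post - t_pre, and both
# exponentials decay with |dt| (as the surrounding text describes); the
# parameter values are illustrative only.
def stdp_dw(dt, eta_pre=0.01, eta_post=0.01, tau_t=20.0):
    if dt > 0:                                  # pre fires before post: potentiate
        return eta_post * math.exp(-dt / tau_t)
    if dt < 0:                                  # post fires before pre: depress
        return -eta_pre * math.exp(dt / tau_t)
    return 0.0
```

<p>The update is largest for nearly coincident spikes and vanishes for widely separated ones, which is what makes STDP sensitive to spike-timing corruption by the attacks studied later.</p>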
</sec>
</sec>
<sec id="s2-4">
<title>2.4 Neuron Design and Implementation</title>
<p>In this work, all neuron models are implemented and analyzed in HSPICE using the PTM 65&#xa0;nm technology.</p>
<sec id="s2-4-1">
<title>2.4.1 Axon Hillock Spiking Neuron Design</title>
<p>The Axon Hillock circuit (<xref ref-type="bibr" rid="B24">Mead and Ismail, 2012</xref>) (<xref ref-type="fig" rid="F2">Figure&#x20;2A</xref>) consists of an amplifier block implemented using two inverters in series (shown in dotted gray box). The input current (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>) is integrated at the neuron membrane capacitance (<italic>C</italic>
<sub>
<italic>mem</italic>
</sub>), and the analog membrane voltage (<italic>V</italic>
<sub>
<italic>mem</italic>
</sub>) rises linearly until it crosses the amplifier&#x2019;s threshold. Once it reaches this point, the output (<italic>V</italic>
<sub>
<italic>out</italic>
</sub>) switches from &#x201c;0&#x201d; to <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>. This <italic>V</italic>
<sub>
<italic>out</italic>
</sub> is fed back into a reset transistor (<italic>M</italic>
<sub>
<italic>N</italic>1</sub>) and activates a positive feedback through the capacitor divider (<italic>C</italic>
<sub>
<italic>fb</italic>
</sub>). Another transistor (<italic>M</italic>
<sub>
<italic>N</italic>2</sub>), controlled by <italic>V</italic>
<sub>
<italic>pw</italic>
</sub>, determines the reset current. If reset current <inline-formula id="inf1">
<mml:math id="m5">
<mml:mo>&#x3e;</mml:mo>
<mml:mspace width="0.3333em"/>
<mml:msub>
<mml:mrow>
<mml:mi>I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, <italic>C</italic>
<sub>
<italic>mem</italic>
</sub> is discharged until <italic>V</italic><sub><italic>mem</italic></sub> falls below the amplifier&#x2019;s threshold. This causes <italic>V</italic>
<sub>
<italic>out</italic>
</sub> to switch from <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> to &#x201c;0&#x201d;. The output remains &#x201c;0&#x201d; until the entire cycle repeats.&#x20;<xref ref-type="fig" rid="F2">Figure&#x20;2C</xref> depicts the expected results of <italic>V</italic>
<sub>
<italic>mem</italic>
</sub> and&#x20;<italic>V</italic>
<sub>
<italic>out</italic>
</sub>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>
<bold>(A)</bold> Axon Hillock circuit; <bold>(B)</bold> voltage amplifier I&#x26;F circuit; <bold>(C)</bold> expected membrane voltage and output voltage of Axon Hillock neuron; <bold>(D)</bold> expected membrane voltage of I&#x26;F neuron.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g002.tif"/>
</fig>
<p>In this paper, the membrane capacitance (<italic>C</italic>
<sub>
<italic>mem</italic>
</sub>) and the feedback capacitance (<italic>C</italic>
<sub>
<italic>fb</italic>
</sub>) are both set to 1&#xa0;pF. For experimental purposes, input current spikes with an amplitude of 200&#xa0;nA, a spike width of 25&#xa0;ns, and a spike rate of 40&#xa0;MHz are generated through the current source (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>). The <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> of the design is set to 1&#xa0;V. <xref ref-type="fig" rid="F3">Figure&#x20;3A</xref> shows the&#x20;simulation results of the input current spikes (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>) and&#x20;the corresponding membrane and the output voltage (<italic>V</italic>
<sub>
<italic>out</italic>
</sub>).</p>
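<p>A back-of-envelope check of these operating points, assuming ideal integration on <italic>C</italic><sub><italic>mem</italic></sub> (no leakage, and ignoring the <italic>C</italic><sub><italic>fb</italic></sub> feedback divider):</p>

```python
# Charge deposited on the Axon Hillock membrane by one input spike, using the
# stated 200 nA amplitude, 25 ns width, and 1 pF membrane capacitance.
# Ideal-integrator assumption: leakage and the C_fb divider are ignored.
I_IN  = 200e-9    # spike amplitude (A)
T_W   = 25e-9     # spike width (s)
C_MEM = 1e-12     # membrane capacitance (F)

dq = I_IN * T_W   # charge per spike: 5 fC
dv = dq / C_MEM   # membrane voltage step per spike: 5 mV
```

<p>Under this idealization, on the order of a hundred input spikes are needed to traverse a 0.5&#xa0;V swing, so a <italic>V</italic><sub><italic>DD</italic></sub>-dependent change in spike amplitude directly shifts the time-to-spike.</p>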
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Simulation result of <bold>(A)</bold> Axon Hillock spike generation showing input current (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>) (top plot), the membrane voltage (<italic>V</italic>
<sub>
<italic>mem</italic>
</sub>), and the output voltage (<italic>V</italic>
<sub>
<italic>out</italic>
</sub>) (bottom plot); and <bold>(B)</bold> voltage amplifier I&#x26;F neuron spike generation showing input current (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>) (top plot and zoomed-in), and the membrane voltage (<italic>V</italic>
<sub>
<italic>mem</italic>
</sub>) (bottom plot).</p>
</caption>
<graphic xlink:href="fnano-03-801999-g003.tif"/>
</fig>
</sec>
<sec id="s2-4-2">
<title>2.4.2 Voltage Amplifier I&#x26;F Neuron Design</title>
<p>The voltage amplifier I&#x26;F circuit (<xref ref-type="bibr" rid="B33">Van Schaik, 2001</xref>) (<xref ref-type="fig" rid="F2">Figure&#x20;2B</xref>) employs a 5-transistor amplifier that offers better control over the threshold voltage of the neuron. This design allows the designer to determine an explicit threshold and an explicit refractory period. The threshold voltage (<italic>V</italic>
<sub>
<italic>thr</italic>
</sub>) of the amplifier employed is set to 0.5&#xa0;V and the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> is set to 1&#xa0;V. The neuron membrane is modeled using a 10-pF capacitance (<italic>C</italic>
<sub>
<italic>mem</italic>
</sub>) and the membrane leakage is controlled by transistor <italic>M</italic>
<sub>
<italic>N</italic>4</sub> with a gate (<italic>V</italic>
<sub>
<italic>lk</italic>
</sub>) voltage of 0.2&#xa0;V. The excitatory input current spikes (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>) integrate charge over <italic>C</italic>
<sub>
<italic>mem</italic>
</sub> and the node voltage at <italic>V</italic>
<sub>
<italic>mem</italic>
</sub> rises linearly. Once <italic>V</italic>
<sub>
<italic>mem</italic>
</sub> crosses <italic>V</italic>
<sub>
<italic>thr</italic>
</sub>, the comparator output switches from &#x201c;0&#x201d; to <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>. This output is fed into two inverters in series, where the output of the first inverter is used to pull up <italic>V</italic>
<sub>
<italic>mem</italic>
</sub> to <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> and the output of the second inverter is used to charge a second capacitor (<italic>C</italic>
<sub>
<italic>k</italic>
</sub>) of 20&#xa0;pF. The node voltage of <italic>C</italic>
<sub>
<italic>k</italic>
</sub> is fed back to a reset transistor <italic>M</italic>
<sub>
<italic>N</italic>1</sub>. When this node voltage is high enough, <italic>M</italic>
<sub>
<italic>N</italic>1</sub> is activated and <italic>V</italic>
<sub>
<italic>mem</italic>
</sub> is pulled down to &#x201c;0&#x201d; and remains LOW until <italic>C</italic>
<sub>
<italic>k</italic>
</sub> discharges below the activation voltage of <italic>M</italic>
<sub>
<italic>N</italic>1</sub>. For experimental purposes, input current spikes with an amplitude of 200&#xa0;nA, a spike width of 25&#xa0;ns, and a time interval of 25&#xa0;ns between consecutive spikes are generated through the current source (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>). <xref ref-type="fig" rid="F2">Figure&#x20;2D</xref> depicts the expected results of <italic>V</italic>
<sub>
<italic>mem</italic>
</sub>. <xref ref-type="fig" rid="F3">Figure&#x20;3B</xref> shows the simulation results of input current spikes (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>) and corresponding membrane voltage (<italic>V</italic>
<sub>
<italic>mem</italic>
</sub>).</p>
</sec>
<sec id="s2-4-3">
<title>2.4.3 SNN Current Driver Design</title>
<p>A current driver provides the input current spikes to the neuron, e.g., an image input converted to a current spike train. We have designed a current source based on a current mirror (<xref ref-type="fig" rid="F4">Figure&#x20;4A</xref>), where <italic>V</italic>
<sub>
<italic>GS</italic>
</sub> of <italic>M</italic>
<sub>
<italic>N</italic>2</sub> and <italic>M</italic>
<sub>
<italic>N</italic>3</sub> are equal, causing both transistors to pass the same current. The sizes of the <italic>M</italic>
<sub>
<italic>N</italic>2</sub> and <italic>M</italic>
<sub>
<italic>N</italic>3</sub> transistors and the resistor (<italic>R</italic>
<sub>1</sub>) are chosen to provide a current of amplitude 200&#xa0;nA. Since the input current of the neuron is modeled as spikes, we have added the <italic>M</italic>
<sub>
<italic>N</italic>1</sub> transistor to act as a switch that is controlled by incoming voltage spikes (<italic>V</italic>
<sub>
<italic>ctr</italic>
</sub>) from other neurons.</p>
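A first-order behavioral sketch of this gated mirror follows (the gate-source drop of 0.6 V and the 2 M&#x3a9; resistor are assumed values picked so the bias lands at the 200 nA design point stated above; the real amplitude comes from transistor sizing in Figure 4A):

```python
# Behavioral sketch (assumed first-order model) of the gated current-mirror
# driver: M_N2/M_N3 mirror a bias current set by VDD, the mirror's
# gate-source drop, and R1, while M_N1 gates the mirrored current with the
# incoming voltage spikes (V_ctr). v_gs and r1 are illustrative assumptions.

def driver_output(v_ctr_train, vdd=1.0, v_gs=0.6, r1=2.0e6):
    """Return the output current for each time step of a V_ctr spike train."""
    i_bias = (vdd - v_gs) / r1                 # bias current in the mirror
    return [i_bias if v_ctr else 0.0 for v_ctr in v_ctr_train]

train = [1, 0, 1, 0, 1]                        # incoming voltage spikes
print(driver_output(train))                    # 200 nA while V_ctr is high
```

The sketch makes the attack surface explicit: the bias current, and hence the spike amplitude, tracks the externally supplied <italic>V</italic><sub><italic>DD</italic></sub>.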
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>
<bold>(A)</bold> Current driver circuit of the SNN neurons with design details; <bold>(B)</bold> change in current driver output spike (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>) amplitude with change in <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>; and <bold>(C)</bold> effect of input spike amplitude on SNN output time-to-spike for Axon Hillock neuron and voltage amplifier I&#x26;F neuron.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g004.tif"/>
</fig>
</sec>
</sec>
</sec>
<sec id="s3">
<title>3 Neuron Attack Models</title>
<p>In this section, we describe the power-based attacks and analyze the effect of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> manipulation on various critical circuit components and parameters of the previously described&#x20;SNNs.</p>
<sec id="s3-1">
<title>3.1 Attack Assumptions</title>
<p>We have investigated the power attacks under the following cases:</p>
<sec id="s3-1-1">
<title>3.1.1 Case 1: Separate Power Domains</title>
<p>We assume that the current drivers and neurons (of whole SNN) are operated on separate <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> domains. This is possible if the supply voltages of the neurons, synapses, and peripherals are distinct, e.g., if the neurons and peripherals are CMOS and the synapses are based on memristors. This case enables us to study the effect of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> modulation on individual components.</p>
</sec>
<sec id="s3-1-2">
<title>3.1.2 Case 2: Single Power Domain</title>
<p>The entire SNN system, including current drivers and neurons, shares the same <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>. This situation is likely if the whole circuit is based on&#x20;CMOS.</p>
</sec>
<sec id="s3-1-3">
<title>3.1.3 Case 3: Local Power Glitching</title>
<p>The adversary has fine-grained control of the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> inside a voltage domain for both the separate and single power domain cases. For example, the adversary can cause local voltage glitching using a focused laser beam.</p>
</sec>
<sec id="s3-1-4">
<title>3.1.4 Case 4: Timed Power Glitching</title>
<p>The adversary controls the time duration of voltage glitching for both separate and single power domains. For example, the adversary modulates the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> only for a part of the SNN training period. This is a likely scenario for a black box attack where the adversary may not know the internal state of the&#x20;SNN.</p>
</sec>
</sec>
<sec id="s3-2">
<title>3.2 SNN Input Spike Corruption</title>
<p>The input current spikes of each neuron are fed using a current driver as described in <xref ref-type="sec" rid="s2-4-3">Section 2.4.3</xref>. The driver is designed with <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 1&#xa0;V and outputs SNN input current spikes of 200&#xa0;nA amplitude and 25&#xa0;ns spike width. An adversary can disrupt normal driver operation by modulating the&#x20;<italic>V</italic>
<sub>
<italic>DD</italic>
</sub>.</p>
<p>
<xref ref-type="fig" rid="F4">Figure&#x20;4B</xref> shows the effect of modulating the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> from 0.8 to 1.2&#xa0;V (corresponding to a &#x2212;/&#x2b; 20% change). The corresponding output spike amplitude ranges from 136&#xa0;nA for 0.8<italic>V</italic>
<sub>
<italic>DD</italic>
</sub> (&#x2212;32% change) to 264&#xa0;nA for 1.2<italic>V</italic>
<sub>
<italic>DD</italic>
</sub> (&#x2b;32% change). We subjected our neuron designs to these input spike amplitude modulations while keeping the input spiking rate constant at 40&#xa0;MHz. <xref ref-type="fig" rid="F4">Figure&#x20;4C</xref> shows the effect on output spike rate for the Axon Hillock neuron, where the time-to-spike (<italic>V</italic>
<sub>
<italic>out</italic>
</sub>) becomes faster by 24.7% under <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 1.2&#xa0;V and <italic>I</italic>
<sub>
<italic>in</italic>
</sub> &#x3d; 264&#xa0;<italic>nA</italic> and becomes slower by 53.7% under <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 0.8&#xa0;V and <italic>I</italic>
<sub>
<italic>in</italic>
</sub> &#x3d; 136&#xa0;<italic>nA</italic>. Similarly, <xref ref-type="fig" rid="F4">Figure&#x20;4C</xref> also shows the effect on output spike rate for the voltage amplifier I&#x26;F neuron where the time-to-spike (<italic>V</italic>
<sub>
<italic>out</italic>
</sub>) becomes faster by 6.7% under <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 1.2&#xa0;V and <italic>I</italic>
<sub>
<italic>in</italic>
</sub> &#x3d; 264&#xa0;<italic>nA</italic> and becomes slower by 14.5% under <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 0.8&#xa0;V and <italic>I</italic>
<sub>
<italic>in</italic>
</sub> &#x3d; 136&#xa0;<italic>nA</italic>.</p>
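The direction and rough magnitude of these shifts follow from a first-order model in which the membrane capacitor charges at a rate proportional to the input current, so the time-to-spike scales as <italic>C</italic>&#xb7;<italic>V</italic><sub><italic>th</italic></sub>/<italic>I</italic><sub><italic>in</italic></sub>. The sketch below applies this simplifying assumption to the attacked amplitudes; it ignores leakage and reset dynamics, which is why it only approximates the simulated 24.7% and 53.7% figures:

```python
# First-order estimate of how input-spike amplitude scales time-to-spike,
# assuming t_spike is proportional to 1 / I_in (constant-current charging
# of the membrane capacitor; leakage and reset dynamics are neglected).

def time_to_spike_change(i_nominal, i_attacked):
    """Percent change in time-to-spike when I_in moves off nominal."""
    return (i_nominal / i_attacked - 1.0) * 100.0

print(time_to_spike_change(200e-9, 264e-9))  # negative: spikes faster
print(time_to_spike_change(200e-9, 136e-9))  # positive: spikes slower
```

For the 264 nA and 136 nA attacked amplitudes this first-order model predicts roughly 24% faster and 47% slower spiking, close to the circuit-level results reported above for the Axon Hillock neuron.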
</sec>
<sec id="s3-3">
<title>3.3 SNN Threshold Manipulation</title>
<p>The adversary can also corrupt normal SNN operation using the externally supplied <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> which can modulate the SNN&#x2019;s membrane threshold voltage. In the ideal condition, the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> is 1&#xa0;V and the threshold voltage of both the Axon Hillock neuron and the I&#x26;F neuron are designed to be 0.5&#xa0;V. <xref ref-type="fig" rid="F5">Figure&#x20;5A</xref> shows that the membrane threshold voltage changes with <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>. In the case of the Axon Hillock neuron, the change in threshold ranges from &#x2212;17.91% for <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 0.8&#xa0;V to &#x2b;16.76% for <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 1.2&#xa0;V. When <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> is modified, the switching threshold of the inverters in the Axon Hillock neuron is also proportionally affected. A lower (higher) <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> lowers (increases) the switching threshold of the inverters and leads to a faster (slower) output spike. Similarly, the change in threshold ranges from &#x2212;18.01% to &#x2b;17.14% when <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> is swept from 0.8 to 1.2&#xa0;V for the voltage amplifier I&#x26;F neuron. Note that the change in threshold for the I&#x26;F neuron is due to the <italic>V</italic>
<sub>
<italic>thr</italic>
</sub> signal (<xref ref-type="fig" rid="F5">Figure&#x20;5A</xref>), which is derived using a simple resistor-based voltage division of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>. Therefore, <italic>V</italic>
<sub>
<italic>thr</italic>
</sub> scales linearly with&#x20;<italic>V</italic>
<sub>
<italic>DD</italic>
</sub>.</p>
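The linear dependence can be checked with a divider-ratio sketch (the 0.5 ratio is an assumption chosen so that the threshold is 0.5 V at the nominal 1 V supply; the simulated circuit reports changes of about &#xb1;17&#x2013;18% rather than the ideal &#xb1;20% because of device nonidealities):

```python
# The I&F neuron's threshold V_thr is described as a resistive division of
# V_DD, so to first order it scales linearly with the supply. The divider
# ratio of 0.5 is an illustrative assumption (nominal 0.5 V at V_DD = 1 V).

def v_thr(vdd, ratio=0.5):
    """Threshold voltage produced by a resistive division of V_DD."""
    return vdd * ratio

for vdd in (0.8, 1.0, 1.2):
    change = (v_thr(vdd) / v_thr(1.0) - 1.0) * 100.0
    print(f"VDD={vdd:.1f} V -> V_thr={v_thr(vdd):.2f} V ({change:+.0f}%)")
```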
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>
<bold>(A)</bold> Change in SNN membrane threshold with change in <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>; effect of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> change on SNN output time-to-spike for <bold>(B)</bold> Axon Hillock neuron; <bold>(C)</bold> voltage amplifier I&#x26;F neuron.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g005.tif"/>
</fig>
<p>The change in membrane threshold modulates the output spike rate of the affected SNN neurons. <xref ref-type="fig" rid="F5">Figures 5B,C</xref> show the change in time-to-spike under <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> manipulation while the input spikes (<italic>I</italic>
<sub>
<italic>in</italic>
</sub>) to the neuron are held at a constant amplitude of 200&#xa0;nA and a rate of 40&#xa0;MHz. The time-to-spike for the Axon Hillock neuron ranges from 17.91% faster to 16.76% slower. Similarly, the time-to-spike for the I&#x26;F neuron ranges from 17.05% faster to 23.53% slower.</p>
</sec>
</sec>
<sec id="s4">
<title>4 Analysis of Power Attacks on SNN</title>
<p>This section describes the effect of power-oriented attacks on the image classification accuracy under the attack assumptions from <xref ref-type="sec" rid="s3-1">Section&#x20;3.1</xref>.</p>
<sec id="s4-1">
<title>4.1 Experimental Setup</title>
<p>We have implemented the Diehl and Cook SNN (<xref ref-type="bibr" rid="B10">Diehl et&#x20;al., 2015</xref>) using the BindsNET (<xref ref-type="bibr" rid="B13">Hazan et&#x20;al., 2018</xref>) network library with PyTorch tensors to test the effect of power-based attacks. The SNN is implemented with 3 neuron layers (<xref ref-type="fig" rid="F6">Figure&#x20;6</xref>), namely an input layer, an excitatory layer (EL), and an inhibitory layer (IL). We employ this SNN for digit classification on the MNIST dataset, which consists of digit images of 28&#x20;&#xd7; 28 pixels. Each input image is converted to Poisson spike trains and fed to the excitatory neurons in an all-to-all connection, where each input spike is fed to each excitatory neuron. The excitatory neurons are 1-to-1 connected with the inhibitory neurons (<xref ref-type="fig" rid="F6">Figure&#x20;6</xref>). Each neuron in the IL is in turn connected to all the neurons in the EL, except the one it received a connection from. The architecture performs supervised learning. For our experiments, the EL and IL have 100 neurons each and all experiments are conducted on 1,000&#x20;Poisson-encoded training images with fixed learning rates of 0.0001 and 0.01 for pre-synaptic and post-synaptic events, respectively. The batch size is set to 32 and the training samples are iterated only once, as configured in (<xref ref-type="bibr" rid="B13">Hazan et&#x20;al., 2018</xref>). Other key design parameters for the implemented SNN are shown in <xref ref-type="table" rid="T1">Table&#x20;1</xref>. Additional details on the neuron layers, learning method, and SNN parameters can be found in (<xref ref-type="bibr" rid="B13">Hazan et&#x20;al., 2018</xref>). The baseline classification accuracy for the attack-free SNN is 75.92% with 60&#xa0;K training images.</p>
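The Poisson encoding step can be sketched in pure Python as a Bernoulli draw per time step (an approximation of a Poisson process; the maximum rate and time step here are illustrative assumptions, not BindsNET's exact encoder parameters):

```python
# Sketch of the Poisson encoding used to feed MNIST images to the input
# layer: each pixel intensity maps to a firing rate, and a Bernoulli draw
# per time step approximates a Poisson spike train. max_rate (spikes per
# time step for a full-intensity pixel) is an illustrative assumption.
import random

def poisson_encode(pixels, time_steps=100, max_rate=0.25, seed=0):
    """pixels: intensities in [0, 1]; returns a time_steps x len(pixels)
    binary spike train (1 = spike)."""
    rng = random.Random(seed)
    return [[1 if p * max_rate > rng.random() else 0 for p in pixels]
            for _ in range(time_steps)]

train = poisson_encode([0.0, 0.5, 1.0])
rates = [sum(col) / len(train) for col in zip(*train)]
print(rates)  # brighter pixels spike more often; a zero pixel never spikes
```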
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Implemented 3-layer SNN (<xref ref-type="bibr" rid="B10">Diehl et&#x20;al., 2015</xref>).</p>
</caption>
<graphic xlink:href="fnano-03-801999-g006.tif"/>
</fig>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>SNN simulation parameters.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Parameters</th>
<th align="center">Value</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">Spike trace decay time constant, <italic>&#x3c4;</italic>
<sub>trace</sub>
</td>
<td align="center">20&#xa0;ms</td>
</tr>
<tr>
<td align="left">Resting potential, V<sub>rest</sub>
</td>
<td align="center">&#x2212;65&#xa0;mV (EL), &#x2212;60&#xa0;mV (IL)</td>
</tr>
<tr>
<td align="left">Threshold voltage potential, <italic>V</italic>
<sub>thr</sub>
</td>
<td align="center">&#x2212;52&#xa0;mV (EL), &#x2212;40&#xa0;mV (IL)</td>
</tr>
<tr>
<td align="left">Membrane reset potential, V<sub>reset</sub>
</td>
<td align="center">&#x2212;60&#xa0;mV (EL), &#x2212;45&#xa0;mV (IL)</td>
</tr>
<tr>
<td align="left">Refractory period, <italic>&#x3b4;</italic>
<sub>ref</sub>
</td>
<td align="center">5&#xa0;ms</td>
</tr>
<tr>
<td align="left">Adaptive threshold time constant, <italic>&#x3c4;</italic>
<sub>theta</sub>
</td>
<td align="center">10<sup>7</sup>&#xa0;ms</td>
</tr>
<tr>
<td align="left">Adaptive threshold voltage increment, <italic>&#x3b8;</italic>
</td>
<td align="center">0.05</td>
</tr>
<tr>
<td align="left">Post-synaptic learning rate, <italic>&#x3b7;</italic>
<sub>post</sub>
</td>
<td align="center">10<sup>&#x2013;2</sup>
</td>
</tr>
<tr>
<td align="left">Pre-synaptic learning rate, <italic>&#x3b7;</italic>
<sub>pre</sub>
</td>
<td align="center">10<sup>&#x2013;4</sup>
</td>
</tr>
<tr>
<td align="left">Number of neurons (<italic>n</italic>)</td>
<td align="center">100 (EL), 100 (IL)</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s4-2">
<title>4.2 Input Spike Corruption</title>
<p>In <xref ref-type="sec" rid="s3-2">Section 3.2</xref>, it is shown that the adversary can manipulate the input spike amplitudes of the SNN neurons. This, in turn, changes the membrane voltage at a different rate for the same number of input spikes, which changes the neuron&#x2019;s time-to-spike (as shown in <xref ref-type="fig" rid="F4">Figure&#x20;4C</xref>).</p>
<sec id="s4-2-1">
<title>Attack 1</title>
<p>In order to translate this effect to our BindsNET SNN implementation, we have modified the rate of change of the neuron&#x2019;s membrane voltage using the variable <italic>theta</italic>, which specifies the voltage change in the neuron membrane for each input spike. <xref ref-type="fig" rid="F7">Figure&#x20;7</xref> shows the corresponding change in MNIST digit classification accuracy. Under the worst-case <italic>theta</italic> change of &#x2212;30%, classification accuracy decreases by 1.9%. Note that this is a <italic>white box</italic> attack since the adversary must know the location of the current drivers within the SNN (obtainable by invasive reverse engineering of the chip) to induce the localized fault.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Effect of current driver corruption (Attack 1) on MNIST classification accuracy.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g007.tif"/>
</fig>
</sec>
</sec>
<sec id="s4-3">
<title>4.3 SNN Threshold Manipulation</title>
<p>The key parameters that are used for threshold manipulation are threshold voltage potential (<italic>&#x3b8;</italic>
<sub>0</sub>) and membrane reset potential (<italic>V</italic>
<sub>
<italic>reset</italic>
</sub>). Using our power-based attacks the threshold can be manipulated in two possible ways:</p>
<sec id="s4-3-1">
<title>Method 1 (Threshold Range Manipulation)</title>
<p>
<xref ref-type="table" rid="T1">Table&#x20;1</xref> shows that <italic>V</italic>
<sub>
<italic>thr</italic>
</sub> &#x3d; &#x2212;52&#xa0;mV (EL), &#x2212;40&#xa0;mV (IL) and <italic>V</italic>
<sub>
<italic>reset</italic>
</sub> &#x3d; &#x2212;60&#xa0;mV (EL), &#x2212;45&#xa0;mV (IL). From these values we can calculate that the baseline threshold ranges are 8&#xa0;mV (EL) and 5&#xa0;mV (IL), respectively. Using <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> manipulation, it is possible to manipulate this threshold range. In Method 1, we vary the threshold range of neurons in each layer from &#x2212;50% to &#x2b;50% to thoroughly analyze the effects of power attacks on SNN classification tasks.</p>
</sec>
<sec id="s4-3-2">
<title>Method 2 (Threshold Value Manipulation)</title>
<p>Here we leverage the power-based attacks to manipulate only the value of the threshold voltage potential (<italic>V</italic>
<sub>
<italic>thr</italic>
</sub>). In <xref ref-type="sec" rid="s3-3">Section 3.3</xref>, it is shown that the adversary can manipulate the membrane threshold voltages of the SNN neurons from &#x2212;20% to &#x2b;20%, which can affect classification accuracy. The change in threshold value affects neurons in the EL and the IL differently. Therefore, we first analyze the individual effect of each neuron layer on classification accuracy, and then the combined effect of all layers. Note that Attacks 2 to 6 are <italic>white box</italic> attacks since the adversary must know the location of the individual SNN layers (obtainable from the layout) to induce the localized faults.</p>
</sec>
<sec id="s4-3-3">
<title>Attack 2</title>
<p>In this attack, we implement Method 1 and subject all the layers of neurons to the same membrane threshold range change. <xref ref-type="fig" rid="F8">Figure&#x20;8A</xref> shows the variation in accuracy as 60&#xa0;K samples are trained with the neuron threshold manipulation. It is seen that the classification accuracy falls for all periods of training progression once the membrane threshold range of both layers reaches &#x2b;30%. <xref ref-type="fig" rid="F8">Figure&#x20;8B</xref> depicts the final average classification accuracy after training 60&#xa0;K samples under threshold range manipulation. A worst-case accuracy degradation of 2.7% below the baseline accuracy is observed when the membrane threshold is increased by 30%. The increased threshold range causes a neuron to take longer to build up the membrane potential to fire an output spike. Therefore, the relative spiking time difference (&#x394;<italic>t</italic>) between two connected neurons increases and the corresponding change in synaptic weight (&#x394;<italic>w</italic>) during each update proportionally decreases. This, in turn, means that an SNN with a higher threshold range requires more training to achieve the same accuracy as an SNN with a smaller threshold range.</p>
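The dependence of the weight update on the spike-time difference can be sketched with a pair-based STDP rule (an assumed simple exponential window; BindsNET's trace-based implementation differs in detail, and the rate and time constant below are taken from Table 1):

```python
# Pair-based STDP sketch (assumed exponential window): a larger spike-time
# difference dt yields a smaller weight update dw, so an attack that delays
# spiking slows learning. eta and tau follow the Table 1 values.
import math

def stdp_dw(dt, eta=0.01, tau=0.020):
    """Weight change for post-minus-pre spike time difference dt (s)."""
    sign = 1.0 if dt > 0 else -1.0     # potentiation vs. depression
    return sign * eta * math.exp(-abs(dt) / tau)

# A larger dt (slower post-synaptic spike) gives a smaller potentiation.
print(stdp_dw(0.005), stdp_dw(0.010))
```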
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>
<bold>(A)</bold> Progression of classification accuracy for 60&#xa0;K training samples under threshold range variation; and <bold>(B)</bold> final classification accuracy for various <italic>Vth</italic> showing worst-case degradation (Attack 2).</p>
</caption>
<graphic xlink:href="fnano-03-801999-g008.tif"/>
</fig>
</sec>
<sec id="s4-3-4">
<title>Attack 3</title>
<p>In this case, we subject only the EL to Method 2 membrane threshold variation to study its individual effect on classification accuracy. This attack is possible when (1) each neuron layer has its own voltage domain and the adversary injects a laser-induced fault, or (2) the neuron layers share a voltage domain but the local fault injection in one layer does not propagate to other layers due to the capacitance of the power rail. Various fractions of neurons in this layer, ranging from 0 to 100%, are subjected to a &#x2212;20% to &#x2b;20% threshold change. This analysis is performed to model the situation when an adversary has fine-grained control of the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> inside a voltage domain, e.g., using a local voltage glitching attack that affects only a section of neurons. This is possible in systems that have thousands of neurons per layer, which may be physically isolated due to interleaving synapse arrays. <xref ref-type="fig" rid="F9">Figure&#x20;9A</xref> shows the corresponding change in the classification accuracy. It is noted that classification accuracy is equal to or better than the baseline accuracy as long as no more than 90% of the layer is affected. For the worst-case threshold change of &#x2212;20%, the classification accuracy degrades by 7.32% when 100% of the EL is affected. In summary, attacking the EL alone has a relatively low impact on the output accuracy. This is intuitive since the effect of any corruption in the EL can be recovered in the following IL.</p>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>Classification accuracy trend with SNN membrane threshold for <bold>(A)</bold> excitatory layer only (Attack 3); <bold>(B)</bold> inhibitory layer only (Attack 4); and <bold>(C)</bold> both excitatory layer and inhibitory layer (Attack 5).</p>
</caption>
<graphic xlink:href="fnano-03-801999-g009.tif"/>
</fig>
</sec>
<sec id="s4-3-5">
<title>Attack 4</title>
<p>In this attack, we subject only the IL to the membrane threshold change. Various fractions of neurons in this layer, ranging from 0 to 100%, are subjected to a &#x2212;20% to &#x2b;20% threshold change. <xref ref-type="fig" rid="F9">Figure&#x20;9B</xref> shows the corresponding change in classification accuracy. It is noted that classification accuracy degrades below the baseline accuracy for 3 out of 4 cases of threshold change and for all fractions of the IL affected. A worst-case degradation of 84.52% below the baseline accuracy is noted, observed at a &#x2212;20% threshold change with 100% of the IL affected. In summary, attacking the IL has a more significant effect on output accuracy than attacking the EL alone. This is understandable since the IL is the final layer before the output; therefore, any loss in learning cannot be recovered.</p>
</sec>
<sec id="s4-3-6">
<title>Attack 5</title>
<p>In this attack, we subject 100% of both the EL and the IL to the same membrane threshold change. <xref ref-type="fig" rid="F9">Figure&#x20;9C</xref> shows the variation in accuracy with the threshold for both layers of neurons. It is seen that the classification accuracy falls sharply as the membrane threshold of both layers decreases below the baseline. A worst-case accuracy degradation of 85.65% below the baseline accuracy is observed when the membrane threshold is reduced by 20%.</p>
</sec>
<sec id="s4-3-7">
<title>Attack 6</title>
<p>In this attack, we vary the timing of the threshold manipulation. We consider the worst-case threshold corruption for the three cases of EL only, IL only, and EL &#x2b; IL for various time durations ranging from 0 to 100% of the training phase. <xref ref-type="fig" rid="F10">Figures 10A&#x2013;C</xref> show the corresponding effect on the classification accuracy. While maximum accuracy degradation is observed when 100% of the training phase is affected, timed attacks for even 25% of the training phase show accuracy degradations of 32% (EL &#x2b; IL), 30% (IL only), and 28% (EL only).</p>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption>
<p>Classification accuracy trend during timed threshold manipulation (Attack 6) of <bold>(A)</bold> excitatory layer only; <bold>(B)</bold> inhibitory layer only; and <bold>(C)</bold> both excitatory layer and inhibitory&#x20;layer.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g010.tif"/>
</fig>
</sec>
</sec>
<sec id="s4-4">
<title>4.4 Input Spike Corruption and Threshold Manipulation</title>
<sec id="s4-4-1">
<title>Attack 7</title>
<p>This is a <italic>black box</italic> attack where the adversary does not need to know the internal architecture of the current driver or the SNN neurons. Here we assume that the power supply is shared among all the components of the SNN system, including the current drivers and all of the neuron layers. Manipulating the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> changes both membrane voltage per spike (<italic>theta</italic>) and the threshold voltages (<italic>V</italic>
<sub>
<italic>thr</italic>
</sub>) (Method 1) of the SNN neurons. <xref ref-type="fig" rid="F11">Figure&#x20;11</xref> shows that the worst-case accuracy degradation is 84.93%.</p>
<fig id="F11" position="float">
<label>FIGURE 11</label>
<caption>
<p>Change in classification accuracy with <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> change for entire system (neurons and peripherals) (Attack 7).</p>
</caption>
<graphic xlink:href="fnano-03-801999-g011.tif"/>
</fig>
</sec>
</sec>
<sec id="s4-5">
<title>4.5 Summary of Power Attack Analysis</title>
<p>From our analysis, we conclude the following:</p>
<sec id="s4-5-1">
<title>4.5.1 SNN Assets</title>
<p>These include: (1) spike rate and amplitude, (2) neuron membrane threshold, and (3) membrane voltage change per spike. Other assets (not studied in this paper) are the strength of the synaptic weights between neurons and the SNN learning rate.</p>
</sec>
<sec id="s4-5-2">
<title>4.5.2 SNN Vulnerabilities</title>
<p>
<italic>V</italic>
<sub>
<italic>DD</italic>
</sub> manipulation (1) causes the neuron&#x2019;s input current driver to generate spikes of lower/higher amplitude than the nominal value, and (2) lowers/raises the neuron&#x2019;s membrane threshold. Both vulnerabilities cause affected neurons to spike faster or slower than intended.</p>
</sec>
<sec id="s4-5-3">
<title>4.5.3 Attack Models</title>
<p>Manipulation of the power supply, either globally or with local fine-grained control, corrupts critical training parameters. Attacks not covered in this paper are (1) generation of adversarial input samples to cause misclassification, (2) fault injection into synaptic weights, and (3) noise injection in input samples to attack specific neurons.</p>
</sec>
</sec>
</sec>
<sec id="s5">
<title>5 Defenses Against Power-Based SNN Attacks</title>
<p>In <xref ref-type="sec" rid="s2-3">Section 2.3</xref>, the learning rule for the implemented architecture is explained. The three key designer-controlled parameters are the post- and pre-synaptic learning rates (<italic>&#x3b7;</italic>
<sub>
<italic>pre</italic>
</sub>/<italic>&#x3b7;</italic>
<sub>
<italic>post</italic>
</sub>), the spike trace decay time constant (<italic>&#x3c4;</italic>
<sub>
<italic>t</italic>
</sub>), and the number of neurons (<italic>n</italic>) used per layer. In this section, we analyze the effect of these parameters on SNN classification accuracy under the fault-free (i.e., baseline) and faulty conditions. The STDP parameters can be tuned by modifying the shape of the pre- and post-synaptic spikes as shown in (<xref ref-type="bibr" rid="B29">Saudargiene et&#x20;al., 2004</xref>). These defenses based on design choices (<xref ref-type="sec" rid="s5-1">Sections 5.1</xref>&#x2013;<xref ref-type="sec" rid="s5-3">5.3</xref>) are effective against Attack 1 and Attack 2, where the accuracy degradation is caused by input spike corruption and threshold range manipulation. Furthermore, we propose multiple circuit-level modifications and logic additions (<xref ref-type="sec" rid="s5-4">Sections 5.4</xref>&#x2013;<xref ref-type="sec" rid="s5-6">5.6</xref>) that defend against all proposed attacks (Attacks 1&#x2013;7).</p>
<sec id="s5-1">
<title>5.1 Impact of STDP Synaptic Learning Rate</title>
<p>The baseline synaptic learning rates of the SNN implemented (shown in <xref ref-type="table" rid="T1">Table&#x20;1</xref>) are 10<sup>&#x2013;2</sup> and 10<sup>&#x2013;4</sup> for <italic>&#x3b7;</italic>
<sub>
<italic>post</italic>
</sub> and <italic>&#x3b7;</italic>
<sub>
<italic>pre</italic>
</sub>, respectively. We subjected the neurons in the implemented SNN to an adversarial threshold range variation of &#x2212;50% to &#x2b;50%. The learning rates of the SNN are varied by <inline-formula id="inf2">
<mml:math id="m6">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>8</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:math>
</inline-formula>X to 2X under the adversarial attack to determine its effect on classification accuracy. <xref ref-type="fig" rid="F12">Figure&#x20;12A</xref> depicts the change in the STDP learning curve with learning rate. It is seen that increasing (decreasing) <italic>&#x3b7;</italic> causes a greater (smaller) change in synaptic weight (&#x394;<italic>w</italic>) for the same spike timing difference (&#x394;<italic>t</italic>). <xref ref-type="fig" rid="F13">Figure&#x20;13</xref> depicts the final average classification accuracy after training the SNN with 60&#xa0;K samples under different <italic>&#x3b7;</italic> and threshold (<italic>Vth</italic>) values. The highest accuracy is observed for the SNN with a learning rate of 0.5<italic>&#x3b7;</italic>, which recovers accuracy by 0.83% toward the baseline. The lowest accuracy is observed for the SNN with a learning rate of 2<italic>&#x3b7;</italic>, which further degrades accuracy by 3.61%. Lowering the learning rate proportionally reduces the change in weight (&#x394;<italic>w</italic>) per synaptic update, minimizing the effect of adversarial power-based attacks on STDP learning. Similarly, increasing <italic>&#x3b7;</italic> causes a higher &#x394;<italic>w</italic> and leads to a more pronounced effect on the final classification accuracy. For Attacks 1 and 2, where accuracy losses of 1.9 and 2.7% were observed, this method recovers the accuracy degradation by 43 and 31%, respectively.</p>
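The proportionality underlying this defense can be sketched directly (assumed exponential STDP window with the Table 1 values; the 1/8X to 2X sweep matches the one described above):

```python
# Sketch of the defense knob analyzed here: the STDP weight update dw is
# proportional to the learning rate eta, so scaling eta from 1/8x to 2x
# scales the damage done by each attack-perturbed update by the same
# factor. The exponential window shape is an assumption.
import math

def dw(dt, eta, tau=0.020):
    """Potentiation for spike-time difference dt (s) at learning rate eta."""
    return eta * math.exp(-dt / tau)

for scale in (0.125, 0.25, 0.5, 1.0, 2.0):     # the 1/8X..2X sweep
    print(f"{scale:5.3f} x eta -> dw = {dw(0.005, 0.01 * scale):.6f}")
```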
<fig id="F12" position="float">
<label>FIGURE 12</label>
<caption>
<p>Change in STDP learning curve with <bold>(A)</bold> learning rate (<italic>&#x3b7;</italic>); and <bold>(B)</bold> trace decay constant (<italic>&#x3c4;</italic>).</p>
</caption>
<graphic xlink:href="fnano-03-801999-g012.tif"/>
</fig>
<fig id="F13" position="float">
<label>FIGURE 13</label>
<caption>
<p>Effect of learning rate (<italic>&#x3b7;</italic>) on MNIST classification accuracy after training 60&#xa0;K samples under adversarial threshold variation.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g013.tif"/>
</fig>
</sec>
<sec id="s5-2">
<title>5.2 Impact of STDP Synaptic Trace Decay Constant</title>
<p>The baseline synaptic trace decay constant (<italic>&#x3c4;</italic>) (<xref ref-type="table" rid="T1">Table&#x20;1</xref>) is 20&#xa0;ms. We vary <italic>&#x3c4;</italic> from 0.25X to 1.5X under our adversarial power attack to determine its effect on classification accuracy. <xref ref-type="fig" rid="F12">Figure&#x20;12B</xref> shows that increasing (decreasing) <italic>&#x3c4;</italic> causes a shallower (steeper) slope of the STDP curve and correspondingly a lower (higher) change in synaptic weight (&#x394;<italic>w</italic>) for the same spike timing difference (&#x394;<italic>t</italic>). <xref ref-type="fig" rid="F14">Figure&#x20;14</xref> shows the final average classification accuracy after training the SNN with 60&#xa0;K samples under different <italic>&#x3c4;</italic> and threshold (<italic>Vth</italic>) values. The highest accuracy is observed for the SNN with a trace decay constant of 0.25<italic>&#x3c4;</italic>, which improves average classification accuracy by 0.81%. The lowest accuracy is observed for the SNN with a trace decay constant of 1.25<italic>&#x3c4;</italic>, which further degrades classification accuracy by 0.26%. Lowering the trace decay constant causes a steeper STDP curve and effectively reduces the window of spike timing difference (&#x394;<italic>t</italic>) within which the synaptic weights are updated. Lowering the frequency of updates correspondingly minimizes the effect of power-based attacks on STDP learning. Similarly, increasing <italic>&#x3c4;</italic> widens the update window and leads to a more pronounced effect on final classification accuracy. For Attacks 1 and 2, where accuracy losses of 1.9 and 2.7% are observed, this method recovers the accuracy degradation by 42 and 30%, respectively.</p>
<fig id="F14" position="float">
<label>FIGURE 14</label>
<caption>
<p>Effect of trace decay constant (<italic>&#x3c4;</italic>) on MNIST classification accuracy after training 60&#xa0;K samples under adversarial threshold variation.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g014.tif"/>
</fig>
</sec>
<sec id="s5-3">
<title>5.3 Impact of Number of Neurons per Layer</title>
<p>In the baseline SNN implementation, we utilized 100 neurons (<italic>n</italic>) in each of the IL and EL. We vary the number of neurons per layer over <italic>n</italic> &#x3d; 50, 150, 225, and 400 to study its effect on classification accuracy under adversarial threshold variation. <xref ref-type="fig" rid="F15">Figure&#x20;15</xref> shows the final classification accuracy observed under different <italic>n</italic> and <italic>Vth</italic>. Ideally, a greater number of neurons increases the classification accuracy. Here we see that <italic>n</italic>&#x20;&#x3d; 150 maximizes the accuracy under most of the <italic>Vth</italic> cases and improves average classification accuracy by 0.94%. Further increasing <italic>n</italic> to 225 and 400 leads to a degradation in accuracy. The worst case is observed when <italic>n</italic> &#x3d; 400, where the average accuracy degrades by 17.18%. This can be attributed to the fact that a higher number of neurons under attack has a more pronounced negative effect on SNN training. Ideally, the designer should increase <italic>n</italic> only up to the point where the accuracy gain from a higher <italic>n</italic> exceeds the accuracy degradation caused by a greater number of neurons being under attack. For Attacks 1 and 2, where accuracy losses of 1.9 and 2.7% are observed, this method recovers the accuracy degradation by 47 and 34%, respectively.</p>
<fig id="F15" position="float">
<label>FIGURE 15</label>
<caption>
<p>Impact of the number of neurons per SNN layer on classification accuracy under adversarial <italic>Vth</italic> manipulation.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g015.tif"/>
</fig>
</sec>
<sec id="s5-4">
<title>5.4 Robust Current Driver Design</title>
<p>We propose a current driver that produces neuron input spikes of constant amplitude (<xref ref-type="fig" rid="F16">Figure&#x20;16A</xref>). Here the negative input terminal of the op-amp is tied to a reference voltage, and negative feedback virtually connects the positive terminal to the same reference voltage (<italic>V</italic>
<sub>
<italic>Ref</italic>
</sub>). The current through the <italic>M</italic>
<sub>
<italic>P</italic>1</sub> transistor is <italic>V</italic>
<sub>
<italic>Ref</italic>
</sub>/<italic>R</italic>
<sub>1</sub> and the negative feedback of the amplifier forces the gate voltage of <italic>M</italic>
<sub>
<italic>P</italic>1</sub> to satisfy the current equation of the transistor. Since <italic>V</italic>
<sub>
<italic>GS</italic>
</sub> and <italic>Vth</italic> of <italic>M</italic>
<sub>
<italic>P</italic>1</sub> and <italic>M</italic>
<sub>
<italic>P</italic>2</sub> transistors are the same, <italic>M</italic>
<sub>
<italic>P</italic>2</sub> passes the same current as <italic>M</italic>
<sub>
<italic>P</italic>1</sub>. Note that we use long-channel transistors to reduce the effect of channel-length modulation. The power overhead incurred by the proposed robust current driver compared to the unsecured version is 3%. The area overhead of the robust driver is negligible compared to that of the unsecured driver since the neuron capacitors occupy the majority of the&#x20;area.</p>
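The constant-amplitude property follows directly from the virtual-short and current-mirror relations described above; a minimal numerical sketch, in which the component values are hypothetical and chosen only for illustration:

```python
def driver_output_current(v_ref, r1):
    """Negative feedback forces V_Ref across R1, so M_P1 carries
    I = V_Ref / R1; the M_P1/M_P2 mirror copies that current to the
    output. The spike amplitude is therefore set by V_Ref and R1,
    not by the (attackable) supply voltage."""
    return v_ref / r1

# Hypothetical component values: a 0.2 V reference across a 1 MOhm
# resistor fixes the drive current at 200 nA regardless of V_DD.
for v_dd in (0.8, 1.0, 1.2):  # supply sweep an attacker might apply
    assert abs(driver_output_current(0.2, 1e6) - 200e-9) < 1e-15
```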
<fig id="F16" position="float">
<label>FIGURE 16</label>
<caption>
<p>
<bold>(A)</bold> Robust SNN current driver (constant output spike amplitude); <bold>(B)</bold> comparator designed and implemented in the Axon Hillock neuron to mitigate threshold variation.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g016.tif"/>
</fig>
</sec>
<sec id="s5-5">
<title>5.5 Resiliency to Threshold Voltage Variation</title>
<sec id="s5-5-1">
<title>5.5.1 Voltage Amplifier I&#x26;F Neuron</title>
<p>In order to prevent <italic>V</italic>
<sub>
<italic>thr</italic>
</sub> from being corrupted due to <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> change, it can be generated using a bandgap voltage reference that produces a constant voltage irrespective of power and temperature variations. A bandgap circuit is proposed in (<xref ref-type="bibr" rid="B28">Sanborn et&#x20;al., 2007</xref>) that generates a constant <italic>V</italic>
<sub>
<italic>ref</italic>
</sub> signal with an output variation of&#x20;&#xb1; 0.56% for supply voltages ranging from 0.85 to 1&#xa0;V at room temperature. A similar design can be used for our proposed I&#x26;F neuron that requires a constant external <italic>V</italic>
<sub>
<italic>thr</italic>
</sub> signal. Since the <italic>V</italic>
<sub>
<italic>thr</italic>
</sub> variation (&#xb1;0.56%) under <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> manipulation is negligible, the classification accuracy degradation reduces to &#x223c;0%. For our experimental 100-neuron (per layer) implementation, the area overhead incurred by the bandgap circuit is 65%. However, this overhead can be significantly reduced if the bandgap circuit is shared with other components of the chip and if the SNN is implemented with tens of thousands of neurons, as required by various applications.</p>
</sec>
<sec id="s5-5-2">
<title>5.5.2 Axon Hillock Neuron</title>
<sec id="s5-5-2-1">
<title>Comparator implementation</title>
<p>We replace the first inverter in the Axon Hillock neuron with a comparator that employs <italic>V</italic>
<sub>
<italic>thr</italic>
</sub> generated by a bandgap circuit (<xref ref-type="bibr" rid="B28">Sanborn et&#x20;al., 2007</xref>) as the reference voltage to eliminate the effect of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> variation on the inverter switching threshold. The rest of the design remains the same. <xref ref-type="fig" rid="F16">Figure&#x20;16B</xref> shows the implemented comparator which ensures that the threshold voltage is not determined by the sizing of the inverter transistors or the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>. Instead, it depends on the input biasing of the proposed design. The IN&#x2b; and IN&#x2212; biases are set to 600&#xa0;mV, and <italic>V</italic>
<sub>
<italic>B</italic>
</sub> is set to 400&#xa0;mV. The power overhead incurred is 11% and the area overhead is negligible since the 1&#xa0;pF capacitors occupy a majority of the neuron&#x20;area.</p>
</sec>
<sec id="s5-5-2-2">
<title>Neuron transistor sizing</title>
<p>In the case of the Axon Hillock neuron (<xref ref-type="fig" rid="F2">Figure&#x20;2A</xref>), the membrane threshold is determined by the <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> and the design of the first inverter (transistors <italic>M</italic>
<sub>
<italic>P</italic>1</sub> and <italic>M</italic>
<sub>
<italic>N</italic>3</sub>). Simulations indicate that classification accuracy is affected mostly by lowering the membrane threshold as shown in <xref ref-type="fig" rid="F9">Figure&#x20;9C</xref>. We increased the sizing of the PMOS transistor <italic>M</italic>
<sub>
<italic>P</italic>1</sub> to limit the threshold change due to <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>. <xref ref-type="fig" rid="F16">Figure&#x20;16C</xref> shows that increasing the W/L ratio mitigates the threshold reduction under lower <italic>V</italic>
<sub>
<italic>DD</italic>
</sub>. At 0.8&#xa0;V, the threshold change observed for W/L ratio of 32:1 is &#x2212;5.23% compared to &#x2212;18.01% for the baseline sizing. The corresponding degradation in classification accuracy at <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 0.8<italic>V</italic> is only 3.49% which is a significant improvement compared to the 85.65% degradation observed previously. At <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> &#x3d; 1.2<italic>V</italic>, the threshold change increases by 3.2% for W/L ratio of 32:1 and the corresponding accuracy degradation only increases by 1.4%. For the upsized neuron, the power overhead observed is 25% while the area overhead is negligible since the majority of the neuron area is occupied by the two 1&#xa0;pF capacitors that remain unchanged in the new design.</p>
</sec>
</sec>
</sec>
<sec id="s5-6">
<title>5.6 Detection of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> Manipulation</title>
<p>In addition to robust neuron design, we also propose a technique to detect a voltage glitching attack directed at an individual neuron layer. This is done by introducing a dummy neuron within each neuron layer (shown in <xref ref-type="fig" rid="F17">Figure&#x20;17A</xref>). In our design, the input of the dummy neuron is connected to a current driver that constantly drives spiking inputs of 200&#xa0;nA amplitude and 100&#xa0;ns spike width. The spikes repeat every 200&#xa0;ns and do not depend on the spiking of the neurons in the previous layer. Under ideal conditions, the number of output spikes over a fixed sampling period should be identical for each dummy neuron. <xref ref-type="fig" rid="F17">Figure&#x20;17B</xref> shows the effect of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> change on the dummy neuron&#x2019;s output for both the I&#x26;F and AH neurons over a sampling period of 100&#xa0;ms. It is seen that for both neurons, the number of dummy output spikes differs by <inline-formula id="inf3">
<mml:math id="m7">
<mml:mo>&#x2265;</mml:mo>
<mml:mn>10</mml:mn>
<mml:mi>%</mml:mi>
</mml:math>
</inline-formula> as compared to the baseline. Note that this method is only effective against localized <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> change. For the SNN implemented in <xref ref-type="sec" rid="s4">Section 4</xref>, the area and power overhead for the proposed dummy neuron detection mechanism is &#x223c;1%&#x20;each.</p>
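The detection rule above (flag a layer when its dummy neuron's output spike count deviates from the golden baseline by 10% or more over the sampling window) can be sketched as follows; the spike counts are illustrative placeholders, not measured values:

```python
def vdd_glitch_detected(observed_spikes, baseline_spikes, tolerance=0.10):
    """Flag a localized V_DD glitch when the dummy neuron's output spike
    count over the sampling window deviates from the golden (unattacked)
    baseline by 10% or more, the margin reported for both neuron types."""
    deviation = abs(observed_spikes - baseline_spikes) / baseline_spikes
    return deviation >= tolerance

# Illustrative counts only: the dummy input spikes every 200 ns, so a
# 100 ms window sees 500,000 input spikes; the output spike count under
# nominal V_DD serves as the golden baseline.
baseline = 500_000
assert not vdd_glitch_detected(baseline, baseline)
assert vdd_glitch_detected(440_000, baseline)  # 12% deviation, flagged
```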
<fig id="F17" position="float">
<label>FIGURE 17</label>
<caption>
<p>
<bold>(A)</bold> <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> change detection using dummy neuron; and <bold>(B)</bold> effect of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> on dummy neuron output.</p>
</caption>
<graphic xlink:href="fnano-03-801999-g017.tif"/>
</fig>
</sec>
</sec>
<sec id="s6">
<title>6 Discussion</title>
<sec id="s6-1">
<title>6.1 Extension to Other Neuromorphic Materials</title>
<p>In this study, we have analyzed the impact of power-based attacks on integrate-and-fire CMOS-based neurons, which are the most commonly employed in contemporary SNN architectures. However, each CMOS-based neuron requires tens of transistors and therefore incurs a large area and high power consumption. Neurons based on emerging technologies such as memristors, ferroelectric devices, and phase change memories can address these challenges. Integrate-and-fire neurons using memristors have been proposed in (<xref ref-type="bibr" rid="B25">Mehonic and Kenyon, 2016</xref>) and (<xref ref-type="bibr" rid="B20">Lashkare et&#x20;al., 2018</xref>), where short voltage pulses (input spikes) are employed to increase the conductance of the memristor device. When the conductance reaches a critical value (threshold), the neuron fires a spike, and the conductance is reset. Varying the supply voltage would cause the amplitude of the input spikes to increase/decrease and correspondingly cause the neuron to fire faster/slower. Once the neuron fires, the conductance is reset using a reset pulse that is also <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> dependent. Varying the supply voltage would also lead to improper reset operation. Multiple works (<xref ref-type="bibr" rid="B27">Mulaosmanovic et&#x20;al., 2018</xref>; <xref ref-type="bibr" rid="B8">Chen et&#x20;al., 2019</xref>; <xref ref-type="bibr" rid="B11">Dutta et&#x20;al., 2019</xref>) have proposed ferroelectric devices for neuromorphic computing. In (<xref ref-type="bibr" rid="B27">Mulaosmanovic et&#x20;al., 2018</xref>), a controlled electric field is applied to reversibly tune the polarization state of the ferroelectric material. When a series of short voltage pulses is applied consecutively, it causes an incremental nucleation of nanodomains in the ferroelectric layer. When a critical number of nanodomains is nucleated, an abrupt polarization reversal occurs, which corresponds to the neuron firing. It is shown that the rate of nucleation depends on the amplitude and duration of the input voltage pulses. Therefore, the proposed power-based attacks can corrupt the spiking rate and inject faults in ferroelectric neurons as well. In the case of phase change memories (PCM), the effective thickness of the amorphous region of the chalcogenide can be considered equivalent to the membrane potential of a neuron. In (<xref ref-type="bibr" rid="B30">Sebastian et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B32">Tuma et&#x20;al., 2016</xref>), it is shown that the amorphous region can be grown precisely by controlling the input voltage pulse. Consecutive voltage pulses allow controlled crystallization and ultimately lead to an abrupt change in PCM conductance, which corresponds to the neuron firing. It is also shown that the firing rate of the PCM neurons can be controlled by manipulating the amplitude of the voltage pulses. Therefore, our power-based attacks can corrupt the spiking rate and inject faults in PCM-based neurons as&#x20;well.</p>
</sec>
<sec id="s6-2">
<title>6.2 Extension to Other Neural Network Architectures</title>
<p>Although this work analyzes the impact of power-based fault injection attacks on SNNs, these attacks can be extended to other NNAs as well. Very limited research has been conducted on physical (i.e.,&#x20;power-based) attacks on traditional NNAs such as DNNs. In (<xref ref-type="bibr" rid="B6">Breier et&#x20;al., 2018</xref>) and (<xref ref-type="bibr" rid="B16">Hou et&#x20;al., 2020</xref>), the authors study physical fault injection attacks on the hidden layers of DNNs using laser injection techniques to demonstrate image misclassification. In (<xref ref-type="bibr" rid="B4">Benevenuti et&#x20;al., 2018</xref>), the authors characterize each network layer of an ANN under a laser beam by placing the layers separately on an FPGA floorplan. The authors demonstrate significant degradation in classification accuracy under these laser-based attacks. The proposed power-based attacks can be extended to other types of ANNs by analyzing the effect of <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> variation on the operation of neurons. The ANN can then be implemented with these neurons under attack and the corresponding accuracy change due to their faulty behavior can be determined. This analysis can be the subject of future&#x20;study.</p>
</sec>
</sec>
<sec id="s7">
<title>7 Conclusion</title>
<p>We propose one <italic>black box</italic> and six <italic>white box</italic> attacks against commonly implemented SNN neuron circuits by manipulating their external power supply or inducing localized power glitches. We have demonstrated power-oriented corruption of critical SNN training parameters. We introduced the attacks on SNN-based digit classification tasks as test cases and observed significant degradation in classification accuracy. We analyzed defense techniques that leverage various SNN design parameters (such as learning rate, trace decay constant, and number of neurons) to mitigate accuracy degradation due to power-based attacks. Finally, we also proposed hardware modifications and additions to SNNs (such as robust current driver design and <italic>V</italic>
<sub>
<italic>DD</italic>
</sub> manipulation detection) as countermeasures to our proposed power-based attacks.</p>
</sec>
</body>
<back>
<sec id="s8">
<title>Data Availability Statement</title>
<p>The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.</p>
</sec>
<sec id="s9">
<title>Author Contributions</title>
<p>KN: Implemented hardware SNN neurons, introduced fault injection attacks, simulated Python implementation of SNN, generated results, plots, and designed all the schematics. JL: Helped with Python implementation of SNN and ran simulations. SE: Helped with developing hardware defenses against SNN power attacks. SK: Contributed in overall idea evaluation, discussion, and threat model creation. SG: Contributed in overall idea evaluation, problem identification, write-up, design debug, and result generation.</p>
</sec>
<sec id="s12">
<title>Funding</title>
<p>This work is supported by SRC (2847.001 and 3011.001) and NSF (CNS-1722557, CCF-1718474, DGE-1723687, DGE-1821766, OIA-2040667 and DGE-2113839).</p>
</sec>
<sec sec-type="COI-statement" id="s10">
<title>Conflict of Interest</title>
<p>SK was employed by the company Ampere Computing.</p>
<p>The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s11">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Azghadi</surname>
<given-names>M. R.</given-names>
</name>
<name>
<surname>Lammie</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Eshraghian</surname>
<given-names>J.&#x20;K.</given-names>
</name>
<name>
<surname>Payvand</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Donati</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Linares-Barranco</surname>
<given-names>B.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Hardware Implementation of Deep Network Accelerators towards Healthcare and Biomedical Applications</article-title>. <source>IEEE Trans. Biomed. Circuits Syst.</source> <volume>14</volume>, <fpage>1138</fpage>&#x2013;<lpage>1159</lpage>. <pub-id pub-id-type="doi">10.1109/tbcas.2020.3036081</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Bagheri</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Simeone</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Rajendran</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Adversarial Training for Probabilistic Spiking Neural Networks</article-title>,&#x201d; in <conf-name>2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)</conf-name>, <conf-loc>Kalamata, Greece</conf-loc>, <conf-date>June 25&#x2013;28, 2018</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1109/spawc.2018.8446003</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barenghi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Breveglieri</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Koren</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Naccache</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Fault Injection Attacks on Cryptographic Devices: Theory, Practice, and Countermeasures</article-title>. <source>Proc. IEEE</source> <volume>100</volume>, <fpage>3056</fpage>&#x2013;<lpage>3076</lpage>. <pub-id pub-id-type="doi">10.1109/jproc.2012.2188769</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Benevenuti</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Libano</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Kastensmidt</surname>
<given-names>F. L.</given-names>
</name>
<name>
<surname>Rech</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Comparative Analysis of Inference Errors in a Neural Network Implemented in Sram-Based Fpga Induced by Neutron Irradiation and Fault Injection Methods</article-title>,&#x201d; in <conf-name>2018 31st Symposium on Integrated Circuits and Systems Design (SBCCI)</conf-name>, <conf-loc>Bento Goncalves, RS</conf-loc>, <conf-date>August 27&#x2013;31, 2018</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/sbcci.2018.8533235</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bozzato</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Focardi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Palmarini</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Shaping the Glitch: Optimizing Voltage Fault Injection Attacks</article-title>. <source>Tches</source> <volume>2019</volume> (<issue>2</issue>), <fpage>199</fpage>&#x2013;<lpage>224</lpage>. <pub-id pub-id-type="doi">10.46586/tches.v2019.i2.199-224</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Breier</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hou</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Jap</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bhasin</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Practical Fault Attack on Deep Neural Networks</article-title>,&#x201d; in <conf-name>Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security</conf-name>, <conf-loc>Toronto, Canada</conf-loc>, <conf-date>October 15&#x2013;19, 2018</conf-date>, <fpage>2204</fpage>&#x2013;<lpage>2206</lpage>. <pub-id pub-id-type="doi">10.1145/3243734.3278519</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cao</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Khosla</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition</article-title>. <source>Int. J.&#x20;Comput. Vis.</source> <volume>113</volume>, <fpage>54</fpage>&#x2013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1007/s11263-014-0788-3</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>Y.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). &#x201c;<article-title>Bio-inspired Neurons Based on Novel Leaky-Fefet with Ultra-low Hardware Cost and Advanced Functionality for All-Ferroelectric Neural Network</article-title>,&#x201d; in <conf-name>2019 Symposium on VLSI Technology</conf-name>, <conf-loc>Kyoto, Japan</conf-loc>, <conf-date>June 9&#x2013;14, 2019</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>T136</fpage>&#x2013;<lpage>T137</lpage>. <pub-id pub-id-type="doi">10.23919/vlsit.2019.8776495</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Davies</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Srinivasa</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>T.-H.</given-names>
</name>
<name>
<surname>Chinya</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Cao</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Choday</surname>
<given-names>S. H.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Loihi: A Neuromorphic Manycore Processor with On-Chip Learning</article-title>. <source>Ieee Micro</source> <volume>38</volume>, <fpage>82</fpage>&#x2013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1109/mm.2018.112130359</pub-id> </citation>
</ref>
<ref id="B10">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Diehl</surname>
<given-names>P. U.</given-names>
</name>
<name>
<surname>Neil</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Binas</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cook</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.-C.</given-names>
</name>
<name>
<surname>Pfeiffer</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Fast-classifying, High-Accuracy Spiking Deep Networks through Weight and Threshold Balancing</article-title>,&#x201d; in <conf-name>2015 International joint conference on neural networks (IJCNN)</conf-name>, <conf-loc>Killarney, Ireland</conf-loc>, <conf-date>July 11&#x2013;16, 2015</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/ijcnn.2015.7280696</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Dutta</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Saha</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Panda</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Chakraborty</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Gomez</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Khanna</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). &#x201c;<article-title>Biologically Plausible Ferroelectric Quasi-Leaky Integrate and Fire Neuron</article-title>,&#x201d; in <conf-name>2019 Symposium on VLSI Technology</conf-name>, <conf-loc>Kyoto, Japan</conf-loc>, <conf-date>June 9&#x2013;14, 2019</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>T140</fpage>&#x2013;<lpage>T141</lpage>. <pub-id pub-id-type="doi">10.23919/vlsit.2019.8776487</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Goodfellow</surname>
<given-names>I. J.</given-names>
</name>
<name>
<surname>Shlens</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Szegedy</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2014</year>). <source>Explaining and Harnessing Adversarial Examples</source>. <comment>preprint arXiv:1412.6572</comment>. </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hazan</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Saunders</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Khan</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Sanghavi</surname>
<given-names>D. T.</given-names>
</name>
<name>
<surname>Siegelmann</surname>
<given-names>H. T.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Bindsnet: A Machine Learning-Oriented Spiking Neural Networks Library in python</article-title>. <source>Front. Neuroinform.</source> <volume>12</volume>, <fpage>89</fpage>. <pub-id pub-id-type="doi">10.3389/fninf.2018.00089</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Hebb</surname>
<given-names>D. O.</given-names>
</name>
</person-group> (<year>2005</year>). <source>The Organization of Behavior: A Neuropsychological Theory</source>. <publisher-name>Psychology Press</publisher-name>. </citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heiberg</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kriener</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Tetzlaff</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Einevoll</surname>
<given-names>G. T.</given-names>
</name>
<name>
<surname>Plesser</surname>
<given-names>H. E.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Firing-rate Models for Neurons with a Broad Repertoire of Spiking Behaviors</article-title>. <source>J.&#x20;Comput. Neurosci.</source> <volume>45</volume>, <fpage>103</fpage>&#x2013;<lpage>132</lpage>. <pub-id pub-id-type="doi">10.1007/s10827-018-0693-9</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hou</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Breier</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jap</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bhasin</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Security Evaluation of Deep Neural Network Resistance against Laser Fault Injection</article-title>,&#x201d; in <conf-name>2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA)</conf-name>, <conf-loc>Singapore</conf-loc>, <conf-date>July 20&#x2013;23, 2020</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/ipfa49335.2020.9261013</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Indiveri</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Linares-Barranco</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Hamilton</surname>
<given-names>T. J.</given-names>
</name>
<name>
<surname>Schaik</surname>
<given-names>A. v.</given-names>
</name>
<name>
<surname>Etienne-Cummings</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Delbruck</surname>
<given-names>T.</given-names>
</name>
<etal/>
</person-group> (<year>2011</year>). <article-title>Neuromorphic Silicon Neuron Circuits</article-title>. <source>Front. Neurosci.</source> <volume>5</volume>, <fpage>73</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2011.00073</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kaiser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tieck</surname>
<given-names>J.&#x20;C. V.</given-names>
</name>
<name>
<surname>Hubschneider</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Wolf</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Weber</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hoff</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2016</year>). &#x201c;<article-title>Towards a Framework for End-To-End Control of a Simulated Vehicle with Spiking Neural Networks</article-title>,&#x201d; in <conf-name>2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR)</conf-name>, <conf-loc>San Francisco, CA</conf-loc>, <conf-date>December 13&#x2013;16, 2016</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>127</fpage>&#x2013;<lpage>134</lpage>. <pub-id pub-id-type="doi">10.1109/simpar.2016.7862386</pub-id> </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kurakin</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Goodfellow</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2016</year>). <article-title>Adversarial Examples in the Physical World</article-title>. <comment>preprint arXiv:1607.02533</comment>. </citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lashkare</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chouhan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chavan</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Bhat</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kumbhare</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Ganguly</surname>
<given-names>U.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Pcmo Rram for Integrate-And-Fire Neuron in Spiking Neural Networks</article-title>. <source>IEEE Electron. Device Lett.</source> <volume>39</volume>, <fpage>484</fpage>&#x2013;<lpage>487</lpage>. <pub-id pub-id-type="doi">10.1109/led.2018.2805822</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maass</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>1997</year>). <article-title>Networks of Spiking Neurons: The Third Generation of Neural Network Models</article-title>. <source>Neural Networks</source> <volume>10</volume>, <fpage>1659</fpage>&#x2013;<lpage>1671</lpage>. <pub-id pub-id-type="doi">10.1016/s0893-6080(97)00011-7</pub-id> </citation>
</ref>
<ref id="B22">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Madry</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Makelov</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schmidt</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Tsipras</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Vladu</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2017</year>). <source>Towards Deep Learning Models Resistant to Adversarial Attacks</source>. <comment>arXiv preprint arXiv:1706.06083</comment>. </citation>
</ref>
<ref id="B23">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Marchisio</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nanfa</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Khalid</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Hanif</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Martina</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Shafique</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks</article-title>,&#x201d; in <conf-name>2020 International Joint Conference on Neural Networks (IJCNN)</conf-name>, <conf-loc>Glasgow, UK</conf-loc>, <conf-date>July 19&#x2013;24, 2020</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>8</lpage>. </citation>
</ref>
<ref id="B24">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Mead</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ismail</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2012</year>). <source>Analog VLSI Implementation of Neural Systems</source>, <volume>80</volume>. <publisher-name>Springer Science &#x26; Business Media</publisher-name>. </citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mehonic</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kenyon</surname>
<given-names>A. J.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Emulating the Electrical Activity of the Neuron Using a Silicon Oxide RRAM Cell</article-title>. <source>Front. Neurosci.</source> <volume>10</volume>, <fpage>57</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2016.00057</pub-id> </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Merolla</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Arthur</surname>
<given-names>J.&#x20;V.</given-names>
</name>
<name>
<surname>Alvarez-Icaza</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Cassidy</surname>
<given-names>A. S.</given-names>
</name>
<name>
<surname>Sawada</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Akopyan</surname>
<given-names>F.</given-names>
</name>
<etal/>
</person-group> (<year>2014</year>). <article-title>A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface</article-title>. <source>Science</source> <volume>345</volume>, <fpage>668</fpage>&#x2013;<lpage>673</lpage>. <pub-id pub-id-type="doi">10.1126/science.1254642</pub-id> </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mulaosmanovic</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Chicca</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Bertele</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mikolajick</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Slesazeck</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Mimicking Biological Neurons with a Nanoscale Ferroelectric Transistor</article-title>. <source>Nanoscale</source> <volume>10</volume>, <fpage>21755</fpage>&#x2013;<lpage>21763</lpage>. <pub-id pub-id-type="doi">10.1039/c8nr07135g</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sanborn</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Ivanov</surname>
<given-names>V.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>A Sub-1-V Low-Noise Bandgap Voltage Reference</article-title>. <source>IEEE J.&#x20;Solid-State Circuits</source> <volume>42</volume>, <fpage>2466</fpage>&#x2013;<lpage>2481</lpage>. <pub-id pub-id-type="doi">10.1109/jssc.2007.907226</pub-id> </citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saudargiene</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Porr</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>W&#xf6;rg&#xf6;tter</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2004</year>). <article-title>How the Shape of Pre- and Postsynaptic Signals Can Influence STDP: A Biophysical Model</article-title>. <source>Neural Comput.</source> <volume>16</volume>, <fpage>595</fpage>&#x2013;<lpage>625</lpage>. <pub-id pub-id-type="doi">10.1162/089976604772744929</pub-id> </citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sebastian</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Le Gallo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Krebs</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Crystal Growth within a Phase Change Memory Cell</article-title>. <source>Nat. Commun.</source> <volume>5</volume>, <fpage>4314</fpage>&#x2013;<lpage>4319</lpage>. <pub-id pub-id-type="doi">10.1038/ncomms5314</pub-id> </citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tavanaei</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ghodrati</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kheradpisheh</surname>
<given-names>S. R.</given-names>
</name>
<name>
<surname>Masquelier</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Maida</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Deep Learning in Spiking Neural Networks</article-title>. <source>Neural Networks</source> <volume>111</volume>, <fpage>47</fpage>&#x2013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2018.12.002</pub-id> </citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tuma</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Pantazi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Le Gallo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sebastian</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Eleftheriou</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Stochastic Phase-Change Neurons</article-title>. <source>Nat. Nanotechnol.</source> <volume>11</volume>, <fpage>693</fpage>&#x2013;<lpage>699</lpage>. <pub-id pub-id-type="doi">10.1038/nnano.2016.70</pub-id> </citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Schaik</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2001</year>). <article-title>Building Blocks for Electronic Spiking Neural Networks</article-title>. <source>Neural Networks</source> <volume>14</volume>, <fpage>617</fpage>&#x2013;<lpage>628</lpage>. <pub-id pub-id-type="doi">10.1016/s0893-6080(01)00067-3</pub-id> </citation>
</ref>
<ref id="B34">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Venceslai</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Marchisio</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Alouani</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Martina</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Shafique</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips</article-title>,&#x201d; in <conf-name>2020 International Joint Conference on Neural Networks (IJCNN)</conf-name>, <conf-loc>Glasgow, UK</conf-loc>, <conf-date>July 19&#x2013;24, 2020</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/ijcnn48605.2020.9207351</pub-id> </citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Whatmough</surname>
<given-names>P. N.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Brooks</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>G.-Y.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>DNN Engine: A 28-nm Timing-Error Tolerant Sparse Deep Neural Network Processor for IoT Applications</article-title>. <source>IEEE J.&#x20;Solid-State Circuits</source> <volume>53</volume>, <fpage>2722</fpage>&#x2013;<lpage>2731</lpage>. <pub-id pub-id-type="doi">10.1109/jssc.2018.2841824</pub-id> </citation>
</ref>
<ref id="B36">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Zussa</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Dutertre</surname>
<given-names>J.-M.</given-names>
</name>
<name>
<surname>Clediere</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tria</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2013</year>). &#x201c;<article-title>Power Supply Glitch Induced Faults on FPGA: An In-Depth Analysis of the Injection Mechanism</article-title>,&#x201d; in <conf-name>2013 IEEE 19th International On-Line Testing Symposium (IOLTS)</conf-name>, <conf-loc>Chania, Greece</conf-loc>, <conf-date>July 8&#x2013;10, 2013</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>110</fpage>&#x2013;<lpage>115</lpage>. <pub-id pub-id-type="doi">10.1109/iolts.2013.6604060</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>