<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Neurosci.</journal-id>
<journal-title>Frontiers in Computational Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5188</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fncom.2014.00035</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The neuronal response at extended timescales: long-term correlations without long-term memory</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Soudry</surname> <given-names>Daniel</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/8857"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Meir</surname> <given-names>Ron</given-names></name>
<uri xlink:href="http://community.frontiersin.org/people/u/3061"/>
</contrib>
</contrib-group>
<aff><institution>Department of Electrical Engineering, the Laboratory for Network Biology Research</institution> <country>Technion, Haifa, Israel</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: David Hansel, University of Paris, France</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Germ&#x000E1;n Mato, Centro Atomico Bariloche, Argentina; Gianluigi Mongillo, Paris Descartes University, France</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Daniel Soudry, Department of Statistics, Center for Theoretical Neuroscience, Columbia University, 1255 Amsterdam Avenue, New York, NY 10027, USA e-mail: <email>daniel.soudry&#x00040;gmail.com</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to the journal Frontiers in Computational Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>01</day>
<month>04</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>35</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>12</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>06</day>
<month>03</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Soudry and Meir.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Long-term temporal correlations frequently appear at many levels of neural activity. We show that when such correlations appear in isolated neurons, they indicate the existence of slow underlying processes and lead to explicit conditions on the dynamics of these processes. Moreover, although these slow processes can potentially store information for long times, we demonstrate that this does not imply that the neuron possesses a long memory of its input, even if these processes are bidirectionally coupled with the neuronal response. We derive these results for a broad class of biophysical neuron models, and then fit a specific model to recent experiments. The model reproduces the experimental results, exhibiting long-term (days-long) correlations due to the interaction between slow variables and internal fluctuations. However, its memory of the input decays on a timescale of minutes. We suggest experiments to test these predictions directly.</p></abstract>
<kwd-group>
<kwd>neurons</kwd>
<kwd>temporal correlations</kwd>
<kwd>long memory</kwd>
<kwd>noise</kwd>
<kwd>input&#x02013;output analysis</kwd>
<kwd>linear filters</kwd>
<kwd>power spectral density</kwd>
</kwd-group>
<counts>
<fig-count count="5"/>
<table-count count="0"/>
<equation-count count="60"/>
<ref-count count="48"/>
<page-count count="13"/>
<word-count count="10234"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>1. Introduction</title>
<p>Long-term temporal correlations, or &#x0201C;<italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics&#x0201D; (Keshner, <xref ref-type="bibr" rid="B19">1982</xref>), are ubiquitously found at multiple levels of brain and behavior (Ward and Greenwood, <xref ref-type="bibr" rid="B48">2007</xref>, and references therein). For example, <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics appear in human cognition (Gilden et al., <xref ref-type="bibr" rid="B12">1995</xref>; Repp, <xref ref-type="bibr" rid="B37">2005</xref>), brain and network activity (measured using electroencephalography or local field potentials; B&#x000E9;dard et al., <xref ref-type="bibr" rid="B2">2006</xref>, and references therein), and even Action Potentials (APs) generated by single neurons (Musha and Yamamoto, <xref ref-type="bibr" rid="B27">1997</xref>; Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>). The presence of these long correlations in a neuron&#x00027;s AP responses suggests it is affected by processes with slow dynamics, which can retain information for long times. As a result, if these slow processes are also affected by APs, then the generation of each AP (indirectly) depends on a rather long history of the neuron&#x00027;s previous inputs and APs. This potentially allows a single neuron to perform complex computations over very long timescales. However, it remains unclear whether this type of computation indeed occurs.</p>
<p>Cortical neurons indeed contain processes taking place on multiple timescales. Many types of ion channels are known, with a large range of kinetic rates (Channelpedia, <ext-link ext-link-type="uri" xlink:href="http://channelpedia.epfl.ch/">http://channelpedia.epfl.ch/</ext-link>). Additional new sub-cellular kinetic processes are being discovered at an explosive rate (Bean, <xref ref-type="bibr" rid="B1">2007</xref>; Sj&#x000F6;str&#x000F6;m et al., <xref ref-type="bibr" rid="B41">2008</xref>; Debanne et al., <xref ref-type="bibr" rid="B7">2011</xref>). This variety is particularly large for very slow processes (Marom, <xref ref-type="bibr" rid="B26">2010</xref>). Such rich biophysical machinery can potentially modulate the generation of APs on long timescales. Evidence for such capabilities has been observed in recent work investigating how cortical neurons temporally integrate noisy current stimuli (Lundstrom et al., <xref ref-type="bibr" rid="B24">2008</xref>, <xref ref-type="bibr" rid="B23">2010</xref>; Pozzorini et al., <xref ref-type="bibr" rid="B36">2013</xref>). The temporal integration of the input was approximated using filters with power law decay, reflecting &#x0201C;long memory.&#x0201D; However, these filters were fitted only up to a timescale of about 10 s (or equivalently, frequencies larger than 10<sup>&#x02212;1</sup> Hz), possibly due to the limited duration of the experiments, which involve intracellular recording.</p>
<p>This raises the question of whether the neuron still has long memory on timescales longer than 10 s. Generally, the answer may depend on the type of stimulus used. For example, certain ion channels may &#x0201C;remember&#x0201D; non-sparse inputs longer than sparse inputs (Soudry and Meir, <xref ref-type="bibr" rid="B44">2010</xref>). Here, we focus on the case of sparse (AP-like) input (Figure <xref ref-type="fig" rid="F1">1</xref>), imitating the &#x0201C;natural&#x0201D; input for an axonal compartment which receives APs from a previous compartment. Such stimulation is used in various experiments (e.g., Grossman et al., <xref ref-type="bibr" rid="B14">1979</xref>; De Col et al., <xref ref-type="bibr" rid="B6">2008</xref>; Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Stimulation Regime. (A)</bold> Stimulation consists of (extracellular) sparse current spikes, with inter-stimulus intervals <italic>T</italic><sub><italic>m</italic></sub> and Action Potential (AP) occurrences <italic>Y</italic><sub><italic>m</italic></sub>. <bold>(B)</bold> An AP &#x0201C;occurred&#x0201D; if the voltage <italic>V</italic> crossed a threshold <italic>V</italic><sub><italic>th</italic></sub> following the (sparse) stimulus, with <italic>T</italic><sub><italic>m</italic></sub> &#x0226B; &#x003C4;<sub>AP</sub>.</p></caption>
<graphic xlink:href="fncom-08-00035-g0001.tif"/>
</fig>
<p>We find general conditions under which a neuron can generate <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics in its spiking activity, and show that this does not imply that a neuron has long memory of its history. Specifically, in order to generate <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics, slow processes should span a wide range of timescales, with slower processes having a higher level of internally generated fluctuations (e.g., being more &#x0201C;noisy&#x0201D; due to lower ion channel numbers). However, in a minimal model that generates this behavior, slow processes do not retain memory of the input fluctuations beyond a finite &#x0201C;short&#x0201D; timescale, even though they are affected by the membrane&#x00027;s voltage. A main reason for this is that the &#x0201C;fastest adaptation process&#x0201D; in the model adjusts the neuron&#x00027;s response in such a way that any perturbation in the response is canceled out before slower processes are affected.</p>
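<p>As a numerical illustration of this condition (a sketch only &#x02013; the timescales and noise amplitudes below are assumed for illustration and are not fitted to any data), a superposition of Ornstein&#x02013;Uhlenbeck processes with logarithmically spaced timescales, each carrying the same variance, yields an approximately <italic>f</italic><sup>&#x02212;1</sup> spectrum over the spanned range:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                         # integration step (s); illustrative
n_steps = 200_000
taus = 10.0 ** np.arange(0, 4)    # timescales log-spaced from 1 s to 1000 s

# Each slow process is an Ornstein-Uhlenbeck variable.  Scaling the noise
# amplitude as 1/sqrt(tau) gives every process equal variance, so the
# log-spaced Lorentzian spectra sum to an approximate 1/f power law.
noise = rng.standard_normal((n_steps, len(taus)))
x = np.zeros(len(taus))
trace = np.empty(n_steps)
for t in range(n_steps):
    x += -x / taus * dt + np.sqrt(2 * dt / taus) * noise[t]
    trace[t] = x.sum()

# crude PSD estimate (periodogram)
psd = np.abs(np.fft.rfft(trace - trace.mean()))**2 * dt / n_steps
freqs = np.fft.rfftfreq(n_steps, d=dt)
```

<p>Each process alone contributes a Lorentzian (flat then <italic>f</italic><sup>&#x02212;2</sup>) spectrum; only their superposition, with slower processes carrying proportionally stronger noise, produces the power-law scaling between the shortest and longest timescales.</p>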
<p>We fit the minimal model to the days-long experiments in Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>), where synaptically isolated individual neurons from a rat cortical culture were stimulated with extra-cellular sparse current pulses for an unprecedented duration of days. The neurons exhibited <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics, responding in a complex and irregular manner on timescales from seconds to days. The synaptic isolation of the neurons in the network and their low cross-correlations indicate that these <italic>f</italic><sup>&#x02212;&#x003B1;</sup> fluctuations are internally generated in the neurons (Appendix D). We are able to reproduce their results (Figure <xref ref-type="fig" rid="F3">3</xref>), and predict that the neuron should remember perturbations in its input for about 10<sup>2</sup> s (Figure <xref ref-type="fig" rid="F4">4</xref>). We suggest further experiments to test these predictions (Figure <xref ref-type="fig" rid="F5">5</xref>).</p>
<p>The remainder of the paper is organized as follows. We begin in section 2.1 by presenting the basic setup. Then, in section 2.2, we present the general framework for biophysical modeling of neurons. Working in this framework, in section 2.3 we recall the mathematical formalism from Soudry and Meir (<xref ref-type="bibr" rid="B43">2014</xref>) and derive the power spectrum density for periodic input stimuli. Following a description of <italic>f</italic><sup>&#x02212;&#x003B1;</sup> behavior in section 3.1, we provide in section 3.2 both general and &#x0201C;minimal&#x0201D; conditions for a neuron to display such scaling. In section 3.3 we consider the implications of the model for the input&#x02013;output relation of the neuron, given general stationary inputs. In section 3.4 we demonstrate this numerically in a specific biophysical model which is fitted to the experimental results of Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>). We conclude in section 4 with a summary and discussion of our results. An extensive appendix contains many of the technical details used throughout the paper (see Supplementary Material).</p>
</sec>
<sec sec-type="methods" id="s2">
<title>2. Methods</title>
<sec>
<title>2.1. Preliminaries</title>
<p>In our notation &#x02329;&#x000B7;&#x0232A; is an ensemble average, <italic>i</italic> &#x0225C; <inline-formula><mml:math id="M1"><mml:mrow><mml:msqrt><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msqrt></mml:mrow></mml:math></inline-formula>, a non-capital boldfaced letter <bold>x</bold> &#x0225C; (<italic>x</italic><sub>1</sub>, &#x02026;, <italic>x</italic><sub><italic>n</italic></sub>)<sup>&#x022A4;</sup> is a column vector [where (&#x000B7;)<sup>&#x022A4;</sup> denotes transpose], and a boldfaced capital letter <bold>X</bold> is a matrix (with components <italic>X</italic><sub><italic>mn</italic></sub>).</p>
<sec>
<title>2.1.1. Stimulation</title>
<p>As in Soudry and Meir (<xref ref-type="bibr" rid="B43">2014</xref>) we examine a single, synaptically isolated, excitable neuron under &#x0201C;spike&#x0201D; stimulation. In this stimulation regime, the stimulation current, <italic>I</italic>(<italic>t</italic>), consists of a train of identical short pulses of amplitude <italic>I</italic><sub>0</sub> arriving at times <italic>t</italic><sub><italic>m</italic></sub>. The intervals between the stimulation times are denoted <italic>T</italic><sub><italic>m</italic></sub> &#x0225C; <italic>t</italic><sub><italic>m</italic> &#x0002B; 1</sub> &#x02212; <italic>t</italic><sub><italic>m</italic></sub> (Figure <xref ref-type="fig" rid="F1">1A</xref>, <italic>top</italic>). We assume that the stimulation is sparse, i.e., <italic>T</italic><sub><italic>m</italic></sub> &#x0226B; &#x003C4;<sub>AP</sub>, with &#x003C4;<sub>AP</sub> being the timescale of an AP (Figure <xref ref-type="fig" rid="F1">1B</xref>). Since the neuron is &#x0201C;excitable&#x0201D; it does not generate APs unless stimulated, as in Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>) (i.e., the neuron is neither oscillatory nor spontaneously firing). However, after a stimulation the neuron can either respond with a detectable AP or not respond. We denote AP occurrences as <italic>Y</italic><sub><italic>m</italic></sub>, where <italic>Y</italic><sub><italic>m</italic></sub> &#x0003D; 1 if an AP occurred immediately after the <italic>m</italic>-th stimulation, and 0 otherwise (Figure <xref ref-type="fig" rid="F1">1A</xref>, <italic>bottom</italic>). Note also that <italic>Y</italic><sub><italic>m</italic></sub> is not generally the same as the common count process generated from the APs by binning them into equally sized bins (Appendix B.1) &#x02013; unless <italic>T</italic><sub><italic>m</italic></sub> is constant and equal to the bin size.</p>
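<p>The construction of <italic>Y</italic><sub><italic>m</italic></sub> from stimulation times and detected AP times can be sketched as follows (the detection window below is a hypothetical stand-in for the AP timescale &#x003C4;<sub>AP</sub>, chosen for illustration; it must satisfy window &#x0226A; <italic>T</italic><sub><italic>m</italic></sub>):</p>

```python
import numpy as np

def response_sequence(stim_times, ap_times, window=0.01):
    """Y_m = 1 if an AP was detected within `window` seconds after the
    m-th stimulus.  The 10 ms default is illustrative, standing in for
    the AP timescale tau_AP; sparseness (window << T_m) is assumed."""
    stim_times = np.asarray(stim_times, dtype=float)
    ap_times = np.asarray(ap_times, dtype=float)
    Y = np.zeros(len(stim_times), dtype=int)
    for m, t in enumerate(stim_times):
        if np.any((ap_times >= t) & (ap_times < t + window)):
            Y[m] = 1
    return Y

stim = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # stimulation times t_m
aps = np.array([0.002, 1.003, 1.502])        # APs follow stimuli 0, 2, 3
Y = response_sequence(stim, aps)             # -> [1, 0, 1, 1, 0]
```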
</sec>
<sec>
<title>2.1.2. Statistics</title>
<p>We assume both <italic>Y</italic><sub><italic>m</italic></sub> and <italic>T</italic><sub><italic>m</italic></sub> are wide-sense stationary (Papoulis and Pillai, <xref ref-type="bibr" rid="B35">1965</xref>). We denote by <italic>p</italic><sub>&#x0002A;</sub> &#x0225C; &#x02329; <italic>Y</italic><sub><italic>m</italic></sub> &#x0232A; the mean probability of generating an AP, and by <italic>T</italic><sub>&#x0002A;</sub> &#x0225C; &#x02329; <italic>T</italic><sub><italic>m</italic></sub> &#x0232A; the mean stimulation period. Furthermore, we denote <italic>&#x00176;</italic><sub><italic>m</italic></sub> &#x0225C; <italic>Y</italic><sub><italic>m</italic></sub> &#x02212; <italic>p</italic><sub>&#x0002A;</sub> and <inline-formula><mml:math id="M2"><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover></mml:math></inline-formula><sub><italic>m</italic></sub> &#x0225C; <italic>T</italic><sub><italic>m</italic></sub> &#x02212; <italic>T</italic><sub>&#x0002A;</sub> as the perturbations of <italic>Y</italic><sub><italic>m</italic></sub> and <italic>T</italic><sub><italic>m</italic></sub> from their means. An important tool in quantifying the statistics of signals is the power spectral density (PSD), namely the Fourier transform of the auto-covariance (Papoulis and Pillai, <xref ref-type="bibr" rid="B35">1965</xref>). For analytical convenience, in this work we will use a PSD of the form</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M3"><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>Y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:munderover><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>Y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>m</mml:mi></mml:msub><mml:msub><mml:mover accent='true'><mml:mi>Y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mrow><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow></mml:mstyle><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>with 0 &#x02264; <italic>f</italic> &#x0226A; <italic>T</italic><sup>&#x02212;1</sup><sub>&#x0002A;</sub> in Hertz frequency units. Note that this PSD is proportional to the PSD of the common binned AP count process (Equation 70) under periodic stimulation and at low frequencies &#x02013; the regime in which we investigate the PSD (as in the experiments of Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>). We similarly define the PSD</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M4"><mml:msub><mml:mi>S</mml:mi><mml:mi>T</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:munderover><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>m</mml:mi></mml:msub><mml:msub><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mrow><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow></mml:mstyle><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>and the cross-PSD</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M5"><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>Y</mml:mi><mml:mi>T</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:munderover><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>Y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>m</mml:mi></mml:msub><mml:msub><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mrow><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow></mml:mstyle><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:math></disp-formula>
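<p>In practice, the PSDs in Equations (1)&#x02013;(3) are estimated from a finite record by replacing the ensemble average with a sample average; the Fourier transform of the (biased) sample auto-covariance is then the standard periodogram. A minimal sketch for <italic>S</italic><sub><italic>Y</italic></sub>(<italic>f</italic>) (the helper function below is ours, for illustration only):</p>

```python
import numpy as np

def psd_estimate(Y, T_star):
    """Estimate S_Y(f) of Equation (1): the discrete Fourier transform of
    the biased sample autocovariance of Y_hat = Y - mean(Y), scaled by
    the mean stimulation period T_star.  Frequencies are in Hz, and the
    estimate is meaningful for f << 1/T_star."""
    Y_hat = np.asarray(Y, dtype=float) - np.mean(Y)
    n = len(Y_hat)
    # periodogram = DFT of the biased sample autocovariance
    S = T_star * np.abs(np.fft.rfft(Y_hat))**2 / n
    f = np.fft.rfftfreq(n, d=T_star)
    return f, S

rng = np.random.default_rng(1)
Y = rng.integers(0, 2, size=4096)    # i.i.d. fair-coin responses
f, S = psd_estimate(Y, T_star=0.1)
# for i.i.d. Y_m the expected PSD is flat, at roughly T_star * var(Y)
```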
</sec>
</sec>
<sec>
<title>2.2. General framework</title>
<p>We model the neuron in the standard framework of biophysical neural models &#x02013; i.e., Conductance Based Models (CBMs). However, rather than focusing only on a specific model, we establish general results about a broad class of models. In this framework, the voltage dynamics of an isopotential neuron are determined by ion channels, protein pores which change conformations stochastically with voltage-dependent rates (Hille, <xref ref-type="bibr" rid="B16">2001</xref>). On the population level, such dynamics are generically very well described by models of the form of Soudry and Meir (<xref ref-type="bibr" rid="B43">2014</xref>), (Equations 4&#x02013;6)</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M6"><mml:mover accent='true'><mml:mi>V</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>V</mml:mi><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>r</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>s</mml:mi></mml:mstyle><mml:mo>,</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="E5"><label>(5)</label><mml:math id="M7"><mml:mover><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>r</mml:mi></mml:mstyle><mml:mo>.</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>A</mml:mi></mml:mstyle><mml:mi>r</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>r</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>b</mml:mi></mml:mstyle><mml:mi>r</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>B</mml:mi></mml:mstyle><mml:mi>r</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>V</mml:mi><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>r</mml:mi></mml:mstyle></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>&#x003BE;</mml:mi></mml:mstyle><mml:mi>r</mml:mi></mml:msub></mml:math></disp-formula>
<disp-formula id="E6"><label>(6)</label><mml:math id="M8"><mml:mover><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>s</mml:mi></mml:mstyle><mml:mo>.</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>A</mml:mi></mml:mstyle><mml:mi>s</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>s</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>b</mml:mi></mml:mstyle><mml:mi>s</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>B</mml:mi></mml:mstyle><mml:mi>s</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>V</mml:mi><mml:mo>,</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>s</mml:mi></mml:mstyle></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>&#x003BE;</mml:mi></mml:mstyle><mml:mi>s</mml:mi></mml:msub></mml:math></disp-formula>
<p>with <bold>voltage</bold> <italic>V</italic>, stimulation current <italic>I</italic>(<italic>t</italic>), <bold>rapid</bold> variables <bold>r</bold> (e.g., <italic>m</italic>, <italic>n</italic>, <italic>h</italic> in the Hodgkin&#x02013;Huxley (HH) model Hodgkin and Huxley, <xref ref-type="bibr" rid="B17">1952</xref>), <bold>slow</bold> &#x0201C;excitability&#x0201D; variables <bold>s</bold> &#x02208; [0, 1]<sup><italic>M</italic></sup> (e.g., slow sodium inactivation Chandler and Meves, <xref ref-type="bibr" rid="B4">1970</xref>), and white noise processes &#x003BE;<sub><italic>r</italic>/<italic>s</italic></sub> (with zero mean and unit variance). Also, the matrices <bold>A</bold><sub><italic>r</italic>/<italic>s</italic></sub> and the vectors <bold>b</bold><sub><italic>r</italic>/<italic>s</italic></sub> can be written explicitly using the kinetic rates of the ion channels, while the matrices <bold>B</bold><sub><italic>r</italic>/<italic>s</italic></sub> can be written using those rates in addition to ion channel numbers. Lastly, we denote</p>
<disp-formula id="E7"><mml:math id="M9"><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>D</mml:mi></mml:mstyle><mml:mi>r</mml:mi></mml:msub><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>B</mml:mi></mml:mstyle><mml:mi>r</mml:mi></mml:msub><mml:msubsup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>B</mml:mi></mml:mstyle><mml:mi>r</mml:mi><mml:mo>&#x022A4;</mml:mo></mml:msubsup><mml:mo>;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>D</mml:mi></mml:mstyle><mml:mi>s</mml:mi></mml:msub><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>B</mml:mi></mml:mstyle><mml:mi>s</mml:mi></mml:msub><mml:msubsup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>B</mml:mi></mml:mstyle><mml:mi>s</mml:mi><mml:mo>&#x022A4;</mml:mo></mml:msubsup></mml:mrow></mml:math></disp-formula>
<p>as the diffusion matrices (Orio and Soudry, <xref ref-type="bibr" rid="B32">2012</xref>). In these models the voltage and the rapid variables constitute the AP generation, while the slow variables modulate the excitability of the cell. For simplicity, we assumed that <bold>r</bold> and <bold>s</bold> are not coupled directly, but this is non-essential (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>). The parameter space can be constrained (Soudry and Meir, <xref ref-type="bibr" rid="B45">2012</xref>), since we consider here only <italic>excitable</italic>, non-oscillatory neurons which do not fire spontaneously<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> and which have a single resting state &#x02013; as is common for <italic>isolated</italic> cortical cells, e.g., Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>).</p>
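<p>For concreteness, dynamics of the form of Equations (4)&#x02013;(5) can be integrated with a simple Euler&#x02013;Maruyama scheme. The one-dimensional drift and diffusion functions below are illustrative stand-ins (with the stimulation current set to zero, i.e., resting dynamics), not the CBM fitted later in the paper:</p>

```python
import numpy as np

def euler_maruyama(f, A, b, B, x0, v0, dt, n_steps, rng):
    """Integrate a toy one-variable instance of Equations (4)-(5):
    V' = f(V, r) and r' = A(V) r - b(V) + B(V, r) xi, using the
    Euler-Maruyama scheme.  All functions passed in are illustrative
    stand-ins, not the paper's fitted model."""
    V, r = v0, x0
    Vs = np.empty(n_steps)
    for t in range(n_steps):
        V += dt * f(V, r)
        r += dt * (A(V) * r - b(V)) \
             + B(V, r) * np.sqrt(dt) * rng.standard_normal()
        r = min(max(r, 0.0), 1.0)   # gating variables stay in [0, 1]
        Vs[t] = V
    return Vs

rng = np.random.default_rng(2)
# leaky voltage pulled toward -65 mV plus a gating conductance term;
# the gating variable r relaxes toward 0.5 with channel-like noise
Vs = euler_maruyama(
    f=lambda V, r: -(V + 65.0) + 10.0 * r,
    A=lambda V: -1.0, b=lambda V: -0.5,
    B=lambda V, r: 0.05 * np.sqrt(r * (1.0 - r)),
    x0=0.5, v0=-65.0, dt=0.01, n_steps=5000, rng=rng)
# Vs settles near the fixed point V* = -65 + 10 * 0.5 = -60 mV
```

<p>Note that, as in Equations (5)&#x02013;(6), the noise amplitude here depends on the state (a multiplicative-noise term of the kind set by the channel kinetics and channel numbers through <bold>B</bold><sub><italic>r</italic>/<italic>s</italic></sub>).</p>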
</sec>
<sec>
<title>2.3. The power spectral density of the response</title>
<p>PSD-based estimators are central tools in quantifying long term correlations (Robinson, <xref ref-type="bibr" rid="B38">2003</xref>; Lowen and Teich, <xref ref-type="bibr" rid="B22">2005</xref>), and are commonly used in experimental settings &#x02013; as in Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>). Therefore, in this section we focus on the PSD of the neural response under the sparse stimulation regime (section 2.1) of a CBM (section 2.2).</p>
<sec>
<title>2.3.1. Recap &#x02013; previous mathematical results</title>
<p>Typically, CBMs (Equations 4&#x02013;6) contain many unknown parameters and are highly non-linear. This makes them quite hard to fit using a purely simulation-based approach, especially over long timescales, where simulations are long and models have more unknown parameters. To overcome this, we developed a reduction method that simplifies the analysis and enables fitting of such models. We refer the reader to Soudry and Meir (<xref ref-type="bibr" rid="B43">2014</xref>) for full mathematical details.</p>
<p>In this method, we semi-analytically<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> reduce the full model (Equations 4&#x02013;6) to a simplified model, under the assumption that the timescales of rapid and slow variables are well separated. Under the additional assumption that the neuron dynamics are sufficiently &#x0201C;noisy,&#x0201D; we can linearize the model dynamics, so that (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>, Equation 12)</p>
<disp-formula id="E8"><label>(7)</label><mml:math id="M10"><mml:msub><mml:mover accent='true'><mml:mi>Y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>m</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>s</mml:mi></mml:mstyle><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>m</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>s</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>e</mml:mi><mml:mi>m</mml:mi></mml:msub><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where <italic>e</italic><sub><italic>m</italic></sub> is a white-noise signal with zero mean and variance &#x003C3;<sup>2</sup><sub><italic>e</italic></sub> &#x0225C; <italic>p</italic><sub>&#x0002A;</sub> &#x02212; <italic>p</italic><sup>2</sup><sub>&#x0002A;</sub> (recall that <italic>p</italic><sub>&#x0002A;</sub> is the mean probability of generating an AP), while <bold>s</bold><sub>&#x0002A;</sub> (the excitability fixed point) and <italic>w</italic><sub><italic>j</italic></sub> (an &#x0201C;effective weight&#x0201D; of component <italic>s</italic><sub><italic>j</italic></sub>) can be found self-consistently, together with <italic>p</italic><sub>&#x0002A;</sub>, as a function of <italic>T</italic><sub>&#x0002A;</sub> (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>, Equation 10). After these quantities are found, an expression for the output PSD <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) in this model can be written explicitly. We let <italic>X</italic><sub>&#x0002B;</sub>, <italic>X</italic><sub>&#x02212;</sub>, and <italic>X</italic><sub>0</sub> denote the averages of the quantity <italic>X</italic><sub><italic>s</italic></sub> during an AP response, a failed AP response, and rest, respectively. Also, we denote</p>
<disp-formula id="E9"><mml:math id="M11"><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mtext>AP</mml:mtext></mml:mrow></mml:msub><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:msub><mml:mi>X</mml:mi><mml:mo>+</mml:mo></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mo>&#x02212;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mtext>AP</mml:mtext></mml:mrow></mml:msub><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:math></disp-formula>
<p>as the steady state mean value of <italic>X</italic><sub><italic>s</italic></sub> [this would be <italic>X</italic>(<italic>p</italic><sub>&#x0002A;</sub>, <italic>T</italic><sub>&#x0002A;</sub>) in Equation 7 in Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>]. For example, <bold>A</bold><sub>&#x0002A;</sub> and <bold>D</bold><sub>&#x0002A;</sub> are the respective steady state means of <bold>A</bold><sub><italic>s</italic></sub> and <bold>D</bold><sub><italic>s</italic></sub>. Additionally, we denote (definition below Equation 12 in Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>)</p>
<disp-formula id="E10"><label>(8)</label><mml:math id="M12"><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>a</mml:mi></mml:mstyle><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mtext>AP</mml:mtext></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>A</mml:mi></mml:mstyle><mml:mo>+</mml:mo></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>A</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>s</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mo>&#x02212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>b</mml:mi></mml:mstyle><mml:mo>+</mml:mo></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>b</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>as a &#x0201C;feedback&#x0201D; vector (see Figure <xref ref-type="fig" rid="F1">1C</xref> in Soudry and Meir (<xref ref-type="bibr" rid="B43">2014</xref>) to understand this interpretation), and (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>, Equation 14)</p>
<disp-formula id="E11"><label>(9)</label><mml:math id="M13"><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>c</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0225C;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>I</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>A</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>a</mml:mi></mml:mstyle><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></disp-formula>
<p>as the &#x0201C;closed loop transfer function&#x0201D; (including the effect of the feedback), with <bold>I</bold> being the identity matrix. Using the above notation, we can derive the PSD of the response. Given periodic stimulation (<inline-formula><mml:math id="M14"><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover></mml:math></inline-formula><sub><italic>m</italic></sub> &#x0003D; 0) we obtain (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>, Equation 13)</p>
<disp-formula id="E12"><label>(10)</label><mml:math id="M15"><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>Y</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>c</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>D</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub><mml:msubsup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>c</mml:mi><mml:mo>&#x022A4;</mml:mo></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>+</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:msubsup><mml:mi>&#x003C3;</mml:mi><mml:mi>e</mml:mi><mml:mn>2</mml:mn></mml:msubsup><mml:msup><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:msubsup><mml:mi>T</mml:mi><mml:mo>*</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:msub><mml:mstyle mathvariant='bold' 
mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>c</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>a</mml:mi></mml:mstyle></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
<p>Though Equation (10) relies on two simplifying assumptions, extensive numerical simulations (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>, Figures <xref ref-type="fig" rid="F3">3</xref>&#x02013;<xref ref-type="fig" rid="F5">5</xref>) showed that this expression is rather robust and remains accurate in many cases even if these assumptions do not hold. Therefore, in this work we will always assume that Equation (10) is accurate.</p>
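As a concrete illustration, Equation (10) can be evaluated numerically. The sketch below (Python/NumPy) uses small, arbitrary illustrative values for <bold>A</bold><sub>&#x0002A;</sub>, <bold>D</bold><sub>&#x0002A;</sub>, <bold>a</bold>, <bold>w</bold>, <italic>T</italic><sub>&#x0002A;</sub> and <italic>p</italic><sub>&#x0002A;</sub>; these are assumptions for demonstration, not parameters fitted to a real neuron.

```python
import numpy as np

# Illustrative (assumed) model parameters -- not fitted to data.
M = 2                                      # number of slow variables
A_star = np.diag([-0.1, -0.01])            # stable drift matrix, Re(eigenvalues) < 0
D_star = np.diag([0.5, 0.2])               # diffusion matrix at the fixed point
a = np.array([-0.02, -0.005])              # "feedback" vector (Equation 8)
w = np.array([1.0, 0.7])                   # effective weights
T_star = 1.0                               # mean stimulation period (s)
p_star = 0.8                               # mean probability to generate an AP
sigma_e2 = p_star - p_star ** 2            # white-noise variance of e_m

def H_c(f):
    """Closed-loop transfer function, Equation (9)."""
    return np.linalg.inv(2j * np.pi * f * np.eye(M) - A_star
                         - np.outer(a, w) / T_star)

def S_Y(f):
    """Output PSD for periodic stimulation, Equation (10)."""
    Hm = H_c(-f)
    term1 = w @ Hm @ D_star @ H_c(f).T @ w
    term2 = T_star * sigma_e2 * abs(1 + w @ Hm @ a / T_star) ** 2
    return (term1 + term2).real

psd = S_Y(0.01)                            # PSD evaluated at f = 0.01 Hz
```

Since <bold>A</bold><sub>&#x0002A;</sub> is real, <bold>H</bold><sub><italic>c</italic></sub>(&#x02212;<italic>f</italic>) is the complex conjugate of <bold>H</bold><sub><italic>c</italic></sub>(<italic>f</italic>), so the quadratic form in the first term is real and non-negative, as a PSD must be.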
</sec>
<sec>
<title>2.3.2. The effect of feedback</title>
<p>In the neuron, the slow excitability variables <bold>s</bold> affect the neuronal response, which, in turn, affects the dynamics of the slow variables. To simplify the analysis, it is desirable to &#x0201C;isolate&#x0201D; this feedback effect. To do so, we apply the Sherman-Morrison lemma to Equation (9),</p>
<disp-formula id="E13"><mml:math id="M16"><mml:mrow><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>c</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>o</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>o</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>a</mml:mi></mml:mstyle></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>with</p>
<disp-formula id="E14"><label>(11)</label><mml:math id="M17"><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>o</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0225C;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>I</mml:mi></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>A</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></disp-formula>
<p>being the &#x0201C;open loop&#x0201D; version of <bold>H</bold><sub><italic>c</italic></sub> (<italic>f</italic>) (i.e., if <italic><bold>a</bold></italic> is set to zero). Using this in Equation (10) we obtain</p>
<disp-formula id="E15"><label>(12)</label><mml:math id="M18"><mml:msub><mml:mi>S</mml:mi><mml:mi>Y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mi>S</mml:mi><mml:mi>Y</mml:mi><mml:mi>o</mml:mi></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>&#x003BA;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>with <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) being the &#x0201C;open loop&#x0201D; version of <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) (i.e., <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) with <italic><bold>a</bold></italic> set to zero),</p>
<disp-formula id="E16"><label>(13)</label><mml:math id="M19"><mml:msubsup><mml:mi>S</mml:mi><mml:mi>Y</mml:mi><mml:mi>o</mml:mi></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:msubsup><mml:mi>&#x003C3;</mml:mi><mml:mi>e</mml:mi><mml:mn>2</mml:mn></mml:msubsup><mml:mo>+</mml:mo><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>o</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>D</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub><mml:msubsup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>o</mml:mi><mml:mo>&#x022A4;</mml:mo></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle></mml:math></disp-formula>
<p>and &#x003BA; (<italic>f</italic>) determines the effect of the feedback</p>
<disp-formula id="E17"><label>(14)</label><mml:math id="M20"><mml:mi>&#x003BA;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0225C;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>o</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>a</mml:mi></mml:mstyle><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>Note that <italic>&#x003BA;</italic> (<italic>f</italic>) depends on the feedback through the variable <bold>a</bold>. For example, <bold>a</bold> &#x02192; 0 if the kinetic rates of <bold>s</bold> are not sensitive to AP occurrences<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref>. In that case <italic>&#x003BA;</italic> (<italic>f</italic>) &#x02192; 1 and <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x02192; <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>).</p>
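The factorization <italic>S</italic><sub><italic>Y</italic></sub>(<italic>f</italic>) &#x0003D; <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub>(<italic>f</italic>)|&#x003BA;(<italic>f</italic>)|<sup>&#x02212;2</sup> (Equations 12&#x02013;14) can be verified numerically. A minimal sketch, again with arbitrary illustrative parameter values:

```python
import numpy as np

# Illustrative (assumed) parameters, as in the sketch of Equation (10).
M = 2
A_star = np.diag([-0.1, -0.01])
D_star = np.diag([0.5, 0.2])
a = np.array([-0.02, -0.005])              # feedback vector (Equation 8)
w = np.array([1.0, 0.7])
T_star, sigma_e2 = 1.0, 0.16

def H(f, closed):
    """H_c(f) (Equation 9) if closed, else the open-loop H_o(f) (Equation 11)."""
    A_eff = A_star + (np.outer(a, w) / T_star if closed else 0.0)
    return np.linalg.inv(2j * np.pi * f * np.eye(M) - A_eff)

def S(f, closed):
    """Equation (10) if closed; with a set to zero this is S_Y^o(f), Equation (13)."""
    Hm, Hp = H(-f, closed), H(f, closed)
    gain = abs(1 + w @ Hm @ a / T_star) ** 2 if closed else 1.0
    return (w @ Hm @ D_star @ Hp.T @ w + T_star * sigma_e2 * gain).real

def kappa(f):
    """Feedback factor, Equation (14)."""
    return 1 - w @ H(-f, closed=False) @ a / T_star

f0 = 0.003
lhs = S(f0, closed=True)                          # closed-loop PSD, Equation (10)
rhs = S(f0, closed=False) * abs(kappa(f0)) ** -2  # Equation (12)
```

The two evaluations agree to machine precision, which is exactly the content of the Sherman-Morrison step above.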
</sec>
<sec>
<title>2.3.3. Partial fractions decomposition</title>
<p>To simplify the analysis, we decompose the vector expressions in Equations (13, 14) into partial fractions.</p>
<p>If <bold>A</bold><sub>&#x0002A;</sub> is diagonalizable, then we can write Equation (13) as (Appendix A.1)</p>
<disp-formula id="E18"><label>(15)</label><mml:math id="M21"><mml:msubsup><mml:mi>S</mml:mi><mml:mi>Y</mml:mi><mml:mi>o</mml:mi></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:msubsup><mml:mi>&#x003C3;</mml:mi><mml:mi>e</mml:mi><mml:mn>2</mml:mn></mml:msubsup><mml:mo>+</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:munderover><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>&#x003BB;</mml:mi><mml:mi>k</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where the poles &#x003BB;<sub><italic>k</italic></sub> are the inverse timescales of the slow variables (the eigenvalues of <bold>A</bold><sub>&#x0002A;</sub>), arranged from large to small according to their magnitudes (0 &#x0003C; |&#x003BB;<sub><italic>M</italic></sub>| &#x0003C; |&#x003BB;<sub><italic>M</italic> &#x02212; 1</sub>| &#x0003C; &#x02026; &#x0003C; |&#x003BB;<sub>1</sub>|) and</p>
<disp-formula id="E19"><label>(16)</label><mml:math id="M22"><mml:msub><mml:mi>c</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>w</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:math></disp-formula>
<p>being the amplitudes of these poles, with <italic>D</italic><sub><italic>kj</italic></sub> and <italic>w</italic><sub><italic>k</italic></sub> being the respective components of <bold>D</bold><sub>&#x0002A;</sub> and <bold>w</bold> in a basis in which <bold>A</bold><sub>&#x0002A;</sub> is diagonal. Note that, &#x02200;<italic>k</italic>, Re [&#x003BB;<sub><italic>k</italic></sub>] &#x0003C; 0 (from the properties of <bold>A</bold><sub>&#x0002A;</sub>).</p>
<p>Using a similar derivation for &#x003BA; (<italic>f</italic>), we obtain</p>
<disp-formula id="E20"><label>(17)</label><mml:math id="M23"><mml:mi>&#x003BA;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:munderover><mml:mrow><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:msub><mml:mi>w</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:msub><mml:mi>a</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>with <italic>a</italic><sub><italic>k</italic></sub> and <italic>w</italic><sub><italic>k</italic></sub> being the respective components of <bold>a</bold> and <bold>w</bold> in a basis in which <bold>A</bold><sub>&#x0002A;</sub> is diagonal.</p>
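The partial-fraction form can be checked numerically. The sketch below compares Equation (15) against the direct matrix expression of Equation (13), for a randomly generated symmetric, stable <bold>A</bold><sub>&#x0002A;</sub> (a simplifying assumption, so that the eigenbasis is orthogonal and <bold>D</bold><sub>&#x0002A;</sub> remains symmetric in it); all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 3
B = rng.standard_normal((M, M))
A_star = -(B @ B.T + M * np.eye(M))      # symmetric, all eigenvalues < 0
D_star = np.diag(rng.uniform(0.1, 1.0, M))
w = rng.standard_normal(M)
T_star, sigma_e2 = 1.0, 0.16

lam, U = np.linalg.eigh(A_star)          # poles lambda_k (real and negative here)
wk = U.T @ w                             # w in the eigenbasis
Dk = U.T @ D_star @ U                    # D_star in the eigenbasis

# Amplitudes c_k, Equation (16).
c = np.array([sum(wk[k] * Dk[k, j] * wk[j] * 2 * lam[k] / (lam[k] + lam[j])
                  for j in range(M)) for k in range(M)])

def S_o_direct(f):
    """Open-loop PSD from the matrix form, Equation (13)."""
    Ho = lambda g: np.linalg.inv(2j * np.pi * g * np.eye(M) - A_star)
    return (T_star * sigma_e2 + w @ Ho(-f) @ D_star @ Ho(f).T @ w).real

def S_o_lorentz(f):
    """Open-loop PSD as a sum of Lorentzians, Equation (15)."""
    return T_star * sigma_e2 + np.sum(c / ((2 * np.pi * f) ** 2 + lam ** 2))

f0 = 0.02
err = abs(S_o_direct(f0) - S_o_lorentz(f0))
```

The two expressions agree to floating-point precision, confirming the decomposition for this (assumed symmetric) case.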
</sec>
<sec>
<title>2.3.4. Example &#x02013; a &#x0201C;diagonal&#x0201D; model</title>
<p>For concreteness, we demonstrate our results on a simple model in which <bold>A</bold><sub>&#x0002A;</sub> is a diagonal matrix and, as a result, <bold>D</bold><sub>&#x0002A;</sub> (which depends on <bold>A</bold><sub>&#x0002A;</sub>) is also diagonal. In this &#x0201C;diagonal&#x0201D; model all the components of <bold>s</bold> are uncoupled (i.e., belong to different channel types), and Equation (6) can be written as (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>, section 4.1)</p>
<disp-formula id="E21"><label>(18)</label><mml:math id="M24"><mml:msub><mml:mover accent='true'><mml:mi>s</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>s</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>V</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>s</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>&#x003BE;</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula>
<p>&#x02200; <italic>k</italic>&#x02208;{1, &#x02026;, <italic>M</italic>}, where &#x003C3;<sub><italic>s</italic>,<italic>k</italic></sub>(<italic>V</italic>, <italic>s</italic>) &#x0003D; [(&#x003B4;<sub><italic>k</italic></sub>(<italic>V</italic>)(1 &#x02212; <italic>s</italic><sub><italic>k</italic></sub>) &#x0002B; &#x003B3;<sub><italic>k</italic></sub>(<italic>V</italic>)<italic>s</italic><sub><italic>k</italic></sub>) <italic>N</italic><sup> &#x02212; 1</sup><sub><italic>s</italic>,<italic>k</italic></sub>]<sup>1/2</sup> and <italic>N</italic><sub><italic>s</italic>,<italic>k</italic></sub> is the number of slow ion channels of type <italic>k</italic>. As before, &#x003B3;<sub>&#x0002B;,<italic>k</italic></sub>, &#x003B3;<sub>&#x02212;,<italic>k</italic></sub> and &#x003B3;<sub>0,<italic>k</italic></sub> denote the averages of the kinetic rate &#x003B3;<sub><italic>k</italic></sub>(<italic>V</italic>) during an AP response, a failed AP response and rest, respectively. In addition</p>
<disp-formula id="E22"><mml:math id="M25"><mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mtext>AP</mml:mtext></mml:mrow></mml:msub><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>+</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mtext>AP</mml:mtext></mml:mrow></mml:msub><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></disp-formula>
<p>is the average &#x003B3;<sub><italic>k</italic></sub>(<italic>V</italic>) in steady state. We use a similar notation for &#x003B4;. Therefore</p>
<disp-formula id="E23"><mml:math id="M26"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>A</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>D</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mtext>&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>with zero on all other (non-diagonal) components and</p>
<disp-formula id="E24"><label>(19)</label><mml:math id="M27"><mml:msub><mml:mi>a</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mi>A</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>+</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>+</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>Therefore, in Equations (15, 16) we have,</p>
<disp-formula id="E25"><label>(20)</label><mml:math id="M28"><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula>
<disp-formula id="E26"><label>(21)</label><mml:math id="M29"><mml:msub><mml:mi>c</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mi>k</mml:mi><mml:mn>2</mml:mn></mml:msubsup><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>w</mml:mi><mml:mi>k</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>Importantly, by tuning the parameters <italic>M</italic>, &#x003B3;<sub><italic>k</italic></sub>(<italic>V</italic>), &#x003B4;<sub><italic>k</italic></sub>(<italic>V</italic>), <italic>N</italic><sub><italic>s</italic>,<italic>k</italic></sub> and <italic>w</italic><sub><italic>k</italic></sub> we seem to have complete freedom in determining &#x003BB;<sub><italic>k</italic></sub>, <italic>c</italic><sub><italic>k</italic></sub> and <italic>a</italic><sub><italic>k</italic></sub> (Equations 19&#x02013;21). This, in turn, would give complete freedom in tuning <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) and &#x003BA; (<italic>f</italic>). Therefore, it seems that for any CBM (i.e., not only diagonal models) we can find an <italic>equivalent</italic> diagonal model that produces exactly the same PSD of the response.</p>
<p>The only caveat in the previous argument is that &#x003BB;<sub><italic>k</italic></sub> can be complex in non-diagonal models, but not in a diagonal model, since the kinetic rates &#x003B3;<sub><italic>k</italic></sub>(<italic>V</italic>) and &#x003B4;<sub><italic>k</italic></sub>(<italic>V</italic>) must be real numbers. How would the situation change if some of the poles had complex values? Complex poles (i.e., for which Im[&#x003BB;<sub><italic>k</italic></sub>] &#x02260; 0) always come in conjugate pairs. Asymptotically (i.e., for 2&#x003C0; <italic>f</italic> &#x0226B; |&#x003BB;<sub><italic>k</italic></sub>| or 2&#x003C0; <italic>f</italic> &#x0226A; |&#x003BB;<sub><italic>k</italic></sub>|) such a pair behaves very similarly to two real poles, with an additional &#x0201C;resonance&#x0201D; (either a bump or depression) in a narrow range in the vicinity of these poles (i.e., 2&#x003C0; <italic>f</italic> &#x0007E; |&#x003BB;<sub><italic>k</italic></sub>|) (see Appendix A.2, or Oppenheim et al., <xref ref-type="bibr" rid="B31">1983</xref>).</p>
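The bookkeeping of Equations (19&#x02013;21) is simple enough to sketch directly. In the snippet below the kinetic rates, rate differences, channel numbers and weights are all hypothetical illustration values:

```python
import numpy as np

# Illustrative (assumed) diagonal-model parameters for M = 3 slow channel types.
tau_AP, T_star = 0.005, 1.0
gamma_star = np.array([1e-1, 1e-2, 1e-3])   # steady-state mean rates gamma_{*,k}
delta_star = np.array([2e-1, 2e-2, 2e-3])   # steady-state mean rates delta_{*,k}
N_s = np.array([1e4, 1e4, 1e4])             # channel numbers N_{s,k}
w = np.array([0.5, 0.5, 0.5])               # effective weights w_k

# Poles and amplitudes, Equations (20) and (21).
lam = -(gamma_star + delta_star)
c = w ** 2 / N_s * gamma_star * delta_star / (gamma_star + delta_star)

# Feedback components a_k, Equation (19); the AP/failure rate differences
# below are likewise assumed for illustration.
dgamma = np.array([0.05, 0.005, 0.0005])    # gamma_{+,k} - gamma_{-,k}
ddelta = np.array([0.1, 0.01, 0.001])       # delta_{+,k} - delta_{-,k}
a = tau_AP * (gamma_star * ddelta - dgamma * delta_star) / (gamma_star + delta_star)
```

Note how each channel type contributes one (real, negative) pole whose amplitude shrinks as 1/<italic>N</italic><sub><italic>s</italic>,<italic>k</italic></sub>.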
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<sec>
<title>3.1. Background on <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics</title>
<p>As observed in Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>), the responses of isolated neurons robustly exhibit long-term correlations<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref> under sparse pulse stimulation (Figure <xref ref-type="fig" rid="F1">1</xref> and section 2.1). Signals with such long-term correlations are often described by the term &#x0201C;<italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise,&#x0201D; since the Power Spectral Density (PSD, Papoulis and Pillai, <xref ref-type="bibr" rid="B35">1965</xref>) is a central tool for detecting and quantifying such signals (Robinson, <xref ref-type="bibr" rid="B38">2003</xref>; Lowen and Teich, <xref ref-type="bibr" rid="B22">2005</xref>). As the name implies, if the AP pattern <italic>Y</italic><sub><italic>m</italic></sub> is an &#x0201C;<italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise signal&#x0201D; then its PSD (Equation 10) has an <italic>f</italic><sup>&#x02212;&#x003B1;</sup> shape</p>
<disp-formula id="E27"><label>(22)</label><mml:math id="M30"><mml:msub><mml:mi>S</mml:mi><mml:mi>Y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0221D;</mml:mo><mml:msup><mml:mi>f</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B1;</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where the PSD is defined here as in Equation (1). As is the case for most <italic>f</italic><sup> &#x02212; &#x003B1;</sup> phenomena, Equation (22) holds only in a certain range <italic>f</italic><sub>min</sub> &#x02264; <italic>f</italic> &#x02264; <italic>f</italic><sub>max</sub>, with 0 &#x0003C; &#x003B1; &#x02264; 2. Note also that if &#x003B1; &#x0003E; 1, then necessarily <italic>f</italic><sub>min</sub> &#x0003E; 0<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref>. Such <italic>f</italic><sup>&#x02212;&#x003B1;</sup> behavior is considered interesting due to its &#x0201C;scale-free&#x0201D; properties, which can sometimes indicate a &#x0201C;long memory,&#x0201D; as explained in the introduction. It is therefore natural to ask the following questions:</p>
<list list-type="order">
<list-item><p>What is the biophysical origin of the <italic>f</italic><sup>&#x02212;&#x003B1;</sup> behavior?</p></list-item>
<list-item><p>Does this <italic>f</italic><sup>&#x02212;&#x003B1;</sup> behavior imply that the neuron &#x0201C;remembers&#x0201D; its history on very long timescales (hours and days)?</p></list-item>
</list>
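The exponent &#x003B1; in Equation (22) is typically estimated by fitting a line to the PSD on a log&#x02013;log scale inside [<italic>f</italic><sub>min</sub>, <italic>f</italic><sub>max</sub>]. The following is a minimal numerical sketch of this procedure (our own illustration, using a synthetic Gaussian <italic>f</italic><sup>&#x02212;&#x003B1;</sup> signal rather than the binary AP sequences <italic>Y</italic><sub><italic>m</italic></sub> analyzed in the paper; all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize a long f^-alpha signal by shaping white noise in the
# frequency domain. This is a Gaussian stand-in for illustration; the
# paper's Y_m are binary AP/failure sequences.
alpha_true = 1.2
n = 2 ** 16
freqs = np.fft.rfftfreq(n, d=1.0)
shape = np.zeros_like(freqs)
shape[1:] = freqs[1:] ** (-alpha_true / 2.0)   # amplitude ~ f^(-alpha/2), so PSD ~ f^-alpha
spectrum = shape * (rng.standard_normal(freqs.size)
                    + 1j * rng.standard_normal(freqs.size))
x = np.fft.irfft(spectrum, n)

# Estimate the PSD by averaging periodograms over segments (Bartlett's
# method), then fit a line on a log-log scale: log S = -alpha log f + c.
seg = 2 ** 10
psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 / seg
               for s in x.reshape(-1, seg)], axis=0)
f = np.fft.rfftfreq(seg, d=1.0)
band = (f > 1e-2) & (f < 1e-1)                 # fit only inside [f_min, f_max]
slope, _ = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)
alpha_est = -slope
print(alpha_est)
```

Averaging periodograms over segments reduces the variance of the PSD estimate, which would otherwise dominate the log&#x02013;log fit.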
<p>We aim to answer the first question in section 3.2, focusing on the case of periodic stimulation <italic>T</italic><sub><italic>m</italic></sub> &#x0003D; <italic>T</italic><sub>&#x0002A;</sub>, as in Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>). The second question is addressed in section 3.3, where we examine a general sparse stimulation process <italic>T</italic><sub><italic>m</italic></sub>. Finally, in section 3.4.2 we fit a specific CBM (an extension of a previous CBM) so that it adheres to this set of minimal constraints. We numerically reproduce the experimental results of Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>) and demonstrate our predictions.</p>
</sec>
<sec>
<title>3.2. Biophysical modeling of <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics</title>
<p>As we explained in the introduction, neurons contain a large variety of processes operating on slow timescales. These processes are, in many cases, not well characterized, or contain unknown parameters. Therefore, it is hard to model the behavior of the neuron on slow timescales with a CBM using simulation alone. With so many unknowns, an exhaustive parameter search is infeasible<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref>. Fortunately, since we derived a semi-analytic expression for the PSD (Equation 10), starting from some initial &#x0201C;guess&#x0201D; (as to which processes to include, and with what parameters), it is relatively straightforward to tune the parameters so that the CBM reproduces the experimental results (e.g., by maximizing some &#x0201C;goodness of fit&#x0201D; measure).</p>
<p>However, even if a specific model could be found to reproduce the experimental results, it would still be unclear whether this would be a &#x0201C;useful&#x0201D; model &#x02013; one which can be used to infer the biophysical properties of the neuron, or its response to untested inputs. The first problem is that CBMs are highly degenerate: different parameter values can generate similar behaviors<xref ref-type="fn" rid="fn0007"><sup>7</sup></xref>, so we can never be sure that the &#x0201C;correct&#x0201D; model was inferred. The second problem is that it is unclear whether a &#x0201C;correct&#x0201D; model would be generally useful, since different neurons of the same type can have very different parameters (Marder and Goaillard, <xref ref-type="bibr" rid="B25">2006</xref>).</p>
<p>To address the first problem, we begin in section 3.2.1 by analyzing Equation (10) and asking a more general question &#x02013; which class of CBMs can generate the experimental results? We find &#x0201C;rather general&#x0201D; sufficient conditions, which, given a few assumptions, also become necessary conditions. Next, in section 3.2.2, we aim to find a &#x0201C;minimal&#x0201D; set of constraints on a CBM that fulfills these conditions. Qualitatively, these conditions indicate that, in order to reproduce the experimental results, a general CBM must:</p>
<list list-type="order">
<list-item><p>Include only a finite number of ion channels of each type (implying a stochastic model).</p></list-item>
<list-item><p>Include a few slow processes with timescales &#x0201C;covering&#x0201D; the range of timescales over which <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> is observed.</p></list-item>
<list-item><p>Obey a certain scaling relation (with an exponent of 1&#x02212;&#x003B1;), implying that slower processes are more &#x0201C;noisy.&#x0201D;</p></list-item>
</list>
<p>More detailed explanations of these conditions, and a concrete example, are provided in the following two subsections.</p>
<sec>
<title>3.2.1. General conditions for <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics</title>
<p>In this section we derive general conditions on the parameters of a CBM (section 2.2) so that it can generate robust <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics in <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>). Here, we focus on the case of sparse periodic input <italic>T</italic><sub><italic>m</italic></sub> &#x0003D; <italic>T</italic><sub>&#x0002A;</sub> &#x0226B; &#x003C4;<sub>AP</sub> (as in Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>).</p>
<p>This analysis is based on the decomposition of the PSD <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) as a ratio of <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) and the feedback term |&#x003BA; (<italic>f</italic>)|<sup>2</sup>. Recall that <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> is robustly observed for all stimulation parameters &#x02013; even when <italic>p</italic><sub>&#x0002A;</sub> is near 0 or 1 (see section 3.1). Note that one can arbitrarily vary <italic>p</italic><sub>&#x0002A;</sub> by changing the stimulation parameters (such as <italic>I</italic><sub>0</sub> or <italic>T</italic><sub>&#x0002A;</sub>). It is straightforward to show that when <italic>p</italic><sub>&#x0002A;</sub> &#x02192; 0 or <italic>p</italic><sub>&#x0002A;</sub> &#x02192; 1, the effect of feedback is negligible<xref ref-type="fn" rid="fn0008"><sup>8</sup></xref>, and therefore <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x02248; <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>). This implies that, at least for some stimulation parameters, <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup>. For this reason, and for the sake of analytical simplicity, we first develop general conditions so that <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup>, and later we discuss the effects of the feedback &#x003BA; (<italic>f</italic>).</p>
<p>Note from Equation (15) that if <italic>M</italic> (the dimension of <bold>s</bold> &#x02013; the number of slow processes) is finite, one can have <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> exactly if and only if &#x003B1; &#x0003D; 0 or 2. However, these values are far from what was measured experimentally (Equation 42). Therefore, <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> can be generated exactly only in some limit (in which <italic>M</italic> is infinite), or approximately (if <italic>M</italic> is finite). Also, note that if 2&#x003C0; <italic>f</italic> &#x0226B;|&#x003BB;<sub>1</sub>|, then <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x02212; <italic>T</italic><sub>&#x0002A;</sub> &#x003C3;<sup>2</sup><sub><italic>e</italic></sub> &#x0221D; <italic>f</italic><sup>&#x02212;2</sup>. Additionally, if 2&#x003C0; <italic>f</italic> &#x0226A; |&#x003BB;<sub><italic>M</italic></sub>|, we have <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x02248; constant. Therefore, Equation (15) can generate <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> with 0 &#x0003C; &#x003B1; &#x0003C; 2 only for |&#x003BB;<sub><italic>M</italic></sub>| &#x0003C; 2&#x003C0; <italic>f</italic> &#x0003C; |&#x003BB;<sub>1</sub>|.</p>
<p>Next, we explain when this becomes possible. For simplicity assume that in Equation (15) <italic>T</italic><sub>&#x0002A;</sub> &#x003C3;<sup>2</sup><sub><italic>e</italic></sub> is negligible and all the poles are real (the effect of complex poles will be discussed below). We define the following pole density</p>
<disp-formula id="E28"><label>(23)</label><mml:math id="M31"><mml:mi>&#x003C1;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0225C;</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mi>&#x003B4;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>&#x003BB;</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where &#x003B4;(&#x000B7;) is Dirac&#x00027;s delta function. Using Equations (15 and 23) we obtain</p>
<disp-formula id="E29"><label>(24)</label><mml:math id="M32"><mml:msubsup><mml:mi>S</mml:mi><mml:mi>Y</mml:mi><mml:mi>o</mml:mi></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>&#x003C1;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mi>d</mml:mi><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mi>&#x003BB;</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:mrow></mml:mstyle><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>For |&#x003BB;<sub><italic>M</italic></sub>| &#x0226A; 2&#x003C0; <italic>f</italic> &#x0226A; |&#x003BB;<sub>1</sub>| and 0 &#x0003C; &#x003B1; &#x0003C; 2, Equation (24) becomes</p>
<disp-formula id="E30"><label>(25)</label><mml:math id="M33"><mml:msubsup><mml:mi>S</mml:mi><mml:mi>Y</mml:mi><mml:mi>o</mml:mi></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>C</mml:mi><mml:msup><mml:mi>f</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B1;</mml:mi></mml:mrow></mml:msup></mml:math></disp-formula>
<p>if and <bold>only</bold> if (Appendix A.3)</p>
<disp-formula id="E31"><label>(26)</label><mml:math id="M34"><mml:mi>&#x003C1;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003C1;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:msup><mml:mrow><mml:mo>|</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>&#x003B1;</mml:mi></mml:mrow></mml:msup></mml:math></disp-formula>
<p>in the range |&#x003BB;<sub>1</sub>| &#x0003E; |&#x003BB;| &#x0003E; |&#x003BB;<sub><italic>M</italic></sub>|, with <italic>&#x003C1;</italic><sub>0</sub> &#x0003D; 2&#x003C0;<sup>&#x02212;1</sup><italic>C</italic> sin(&#x003C0;&#x003B1;/2). Therefore, <italic>&#x003C1;</italic>(&#x003BB;), the distribution of the poles, must scale similarly to <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) (but with a different exponent).</p>
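The scaling condition of Equation (26) can be checked numerically: discretizing &#x003C1;(&#x003BB;) &#x0003D; &#x003C1;<sub>0</sub>|&#x003BB;|<sup>1&#x02212;&#x003B1;</sup> into a finite set of log-spaced poles with amplitudes <italic>c</italic><sub><italic>k</italic></sub> &#x02248; &#x003C1;(&#x003BB;<sub><italic>k</italic></sub>)&#x00394;&#x003BB;<sub><italic>k</italic></sub>, and summing the Lorentzians of Equation (24), yields a log&#x02013;log slope close to &#x02212;&#x003B1;. A sketch under our own illustrative parameter choices:

```python
import numpy as np

# Discretize the pole density rho(lambda) = rho_0 |lambda|^(1-alpha)
# (Equation 26) into 200 log-spaced poles with amplitudes
# c_k ~ |lambda_k|^(1-alpha) * dlambda_k, and sum the Lorentzians of
# Equation (24). The resulting PSD follows f^-alpha between
# |lambda_M| and |lambda_1|. All numbers are illustrative.
alpha = 1.3
lam = np.logspace(4, -4, 200)          # |lambda_1| ... |lambda_M|, log-spaced
dlam = np.abs(np.gradient(lam))
c = lam ** (1.0 - alpha) * dlam        # c_k ~ rho(lambda_k) * dlambda_k

f = np.logspace(-2, 2, 50)             # well inside [|lam_M|, |lam_1|] / (2 pi)
S = np.array([(c / ((2 * np.pi * fi) ** 2 + lam ** 2)).sum() for fi in f])

slope, _ = np.polyfit(np.log(f), np.log(S), 1)
print(-slope)                          # close to alpha
```

With this many poles the discrete sum closely approximates the integral of Equation (24); the next section shows that far fewer poles suffice in practice.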
<p>Several comments are in order at this point.</p>
<list list-type="order">
<list-item><p>It was previously known that, in a linear system, an <italic>f</italic><sup>&#x02212;&#x003B1;</sup> PSD could be generated using a similarly scaled sum of real poles (Keshner, <xref ref-type="bibr" rid="B18">1979</xref>, <xref ref-type="bibr" rid="B19">1982</xref>). The novelty here is two-fold: (1) quantitatively analyzing the PSD of CBMs (which are highly non-linear) in a similar way (through Equation 10); and (2) finding that the condition in Equation (26) is not only sufficient, but necessary.</p></list-item>
<list-item><p>Formally, Equation (26) can be exact only in the continuum limit, where the number of poles is infinite and they are closely packed. However, in practice, Equation (25) remains a rather accurate approximation even if the poles are few and well separated (Figure <xref ref-type="fig" rid="F2">2A</xref>), as we shall demonstrate in the next section (as in Keshner, <xref ref-type="bibr" rid="B18">1979</xref>, <xref ref-type="bibr" rid="B19">1982</xref>). Clearly, for simulation purposes, it is beneficial to use a CBM with a finite number of (preferably, few) poles.</p></list-item>
<list-item><p>We have assumed that all the poles are real. What happens if some of the poles are complex? Recall (section 2.3.4) that if some poles have complex values then <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) also has &#x0201C;resonances&#x0201D; (bumps or depressions) in a narrow range near these poles. Technically, scaling these resonance peaks can also be used to approximate Equation (25) (Figure <xref ref-type="fig" rid="F2">2B</xref>). However, we did not pursue this method here since it would require significantly more poles and would be much harder to implement.</p></list-item>
<list-item><p>Note that so far we have discussed only <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>). One can perform a similar analysis directly on <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>). However, we find it is easier to first simplify &#x003BA; (<italic>f</italic>) and then use Equation (12). From Equation (12) the PSD <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) will have a power-law shape in the range |&#x003BB;<sub><italic>M</italic></sub>|&#x0226A;2&#x003C0; <italic>f</italic> &#x0226A; |&#x003BB;<sub>1</sub>| if, in that range: either (1) the magnitude of &#x003BA; (<italic>f</italic>) is constant or slowly varying, or (2) &#x003BA; (<italic>f</italic>) also has a power-law shape. In the first case the exponent of <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) will be the same as the exponent of <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>), and in the second case the exponent will differ. The conditions for both cases can be derived similarly to our analysis of <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>). We demonstrate this next, in a more specific context.</p></list-item>
</list>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Generating <italic>f</italic><sup>-&#x003B1;</sup> PSD using a finite number of poles &#x02013; a graphic description</bold>. Using partial fraction decomposition (Equation 15), <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>-&#x003B1;</sup> (blue) can be approximated (on a log&#x02013;log scale) in two distinct ways: <bold>(A)</bold> using a sum of real poles (green), with scaled amplitudes (approximating Equation 26); <bold>(B)</bold> using a sum of complex poles (orange), with scaled &#x0201C;resonance peaks&#x0201D; (Equation 46). In this work we focus on the first case <bold>(A)</bold>, since it is simpler and requires far fewer poles.</p></caption>
<graphic xlink:href="fncom-08-00035-g0002.tif"/>
</fig>
</sec>
<sec>
<title>3.2.2. A minimal model for <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics</title>
<p>In the previous section we found general conditions under which Equation (13) gives <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup>. In this section, our aim is to generate <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> over <italic>f</italic><sub>min</sub> &#x0003C; <italic>f</italic> &#x0003C; <italic>f</italic><sub>max</sub> in a minimal model, in which <italic>M</italic> (the dimension of <bold>s</bold>) is as small as possible. As explained in section 2.3.4, we do not lose any relevant generality if we restrict ourselves to the case where <bold>A</bold><sub>&#x0002A;</sub> is diagonal (Equation 18). From Equation (26), we know that the |&#x003BB;<sub><italic>k</italic></sub>| must &#x0201C;cover&#x0201D; the frequency range <italic>f</italic><sub>min</sub> &#x0003C; <italic>f</italic> &#x0003C; <italic>f</italic><sub>max</sub>. In order for <italic>M</italic> to be small, we choose the &#x003BB;<sub><italic>k</italic></sub> to be uniform on a logarithmic scale (similarly to Keshner, <xref ref-type="bibr" rid="B19">1982</xref>), so &#x003BB;<sub><italic>k</italic></sub> &#x0221D; &#x003F5;<sup><italic>k</italic></sup> with &#x003F5; &#x0003C; 1. The &#x0201C;simplest&#x0201D; way to achieve this is to have (see Equation 18)</p>
<disp-formula id="E32"><label>(27)</label><mml:math id="M35"><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003B3;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mi>&#x003F5;</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mtext>&#x02003;</mml:mtext><mml:mo>;</mml:mo><mml:mtext>&#x02003;</mml:mtext><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003B4;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mi>&#x003F5;</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></disp-formula>
<p>so</p>
<disp-formula id="E33"><label>(28)</label><mml:math id="M36"><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msup><mml:mi>&#x003F5;</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>In order for &#x003BB;<sub><italic>k</italic></sub>/(2&#x003C0;) to cover the range [<italic>f</italic><sub>min</sub>,<italic>f</italic><sub>max</sub>] we require that</p>
<disp-formula id="E34"><label>(29)</label><mml:math id="M37"><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x0003E;</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mtext>max</mml:mtext></mml:mrow></mml:msub><mml:mo>;</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>M</mml:mi></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:msup><mml:mi>&#x003F5;</mml:mi><mml:mrow><mml:mi>M</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0226A;</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mtext>min</mml:mtext></mml:mrow></mml:msub><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>Given <italic>M</italic>, this sets a constraint on &#x003F5;. In order to have scaling in &#x003C1;(&#x003BB;), as in Equation (26), we also require that <italic>c</italic><sub><italic>k</italic></sub> &#x0221D; |&#x003BB; |<sup>1&#x02212;&#x003B1;</sup> <italic>d</italic>&#x003BB; &#x0221D; &#x003F5;<sup>(2&#x02212;&#x003B1;)<italic>k</italic></sup>, since <italic>d</italic> &#x003BB; &#x0003D; &#x003BB;<sub><italic>k</italic></sub> &#x02212; &#x003BB;<sub><italic>k</italic> &#x02212; 1</sub> &#x0221D; &#x003F5;<sup><italic>k</italic></sup>. Therefore, from Equations (21) and (20) we have</p>
<disp-formula id="E35"><mml:math id="M38"><mml:mrow><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>w</mml:mi><mml:mi>k</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>&#x0221D;</mml:mo><mml:msup><mml:mi>&#x003F5;</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>so that <italic>S</italic><sup><italic>o</italic></sup><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup>. Therefore, we require that <italic>w</italic><sub><italic>k</italic></sub> &#x0221D; &#x003F5;<sup>&#x02212;&#x003BC; <italic>k</italic></sup>, <italic>N</italic><sub><italic>s</italic>,<italic>k</italic></sub> &#x0221D; &#x003F5;<sup><italic>&#x003BD;k</italic></sup> with 2&#x003BC; &#x0002B; <italic>&#x003BD;</italic> &#x0003D; &#x003B1; &#x02212; 1. For &#x003BC; &#x0003E; 0 the slower processes (larger <italic>k</italic>) have larger weight. For <italic>&#x003BD;</italic> &#x0003E; 0 slower processes have a smaller number of ion channels (therefore, they are more &#x0201C;noisy&#x0201D;).</p>
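The construction above (Equations 27&#x02013;29 together with the scaling <italic>w</italic><sub><italic>k</italic></sub> &#x0221D; &#x003F5;<sup>&#x02212;&#x003BC;<italic>k</italic></sup>, <italic>N</italic><sub><italic>s</italic>,<italic>k</italic></sub> &#x0221D; &#x003F5;<sup><italic>&#x003BD;k</italic></sup>, 2&#x003BC; &#x0002B; <italic>&#x003BD;</italic> &#x0003D; &#x003B1; &#x02212; 1) can be summarized in a few lines. The following sketch uses our own illustrative values (<italic>f</italic><sub>min</sub>, <italic>f</italic><sub>max</sub>, &#x003F5;, and <italic>N</italic><sub><italic>s</italic>,1</sub> are assumptions, not fitted quantities), with &#x003BC; &#x0003D; 0 and <italic>&#x003BD;</italic> &#x0003D; &#x003B1; &#x02212; 0.9 as in Equation (31):

```python
import numpy as np

# A sketch of the "minimal" construction of Equations (27-29) with the
# scaling w_k ~ eps^(-mu k), N_{s,k} ~ eps^(nu k), 2 mu + nu = alpha - 1.
# We take mu = 0 and nu = alpha - 0.9 (Equation 31). f_min, f_max, eps,
# and N_s1 are our illustrative assumptions, not fitted values.
alpha = 1.2
nu = alpha - 0.9
f_min, f_max = 1e-4, 1.0               # target frequency band (Hz)
eps = 0.1                              # one pole per decade, eps < 1

lam1 = -2 * np.pi * (10 * f_max)       # |lambda_1| > 2 pi f_max
M = 7                                  # so |lambda_M| = |lambda_1| eps^6 << 2 pi f_min
k = np.arange(1, M + 1)
lam = lam1 * eps ** (k - 1)            # Equation (28)
N_s1 = 10 ** 6                         # channel count of the fastest process
N_s = np.round(N_s1 * eps ** (nu * (k - 1)))   # slower process => fewer channels => noisier

# Equation (29): the poles bracket the target band
assert abs(lam[0]) > 2 * np.pi * f_max
assert abs(lam[-1]) < 2 * np.pi * f_min
print(lam)
print(N_s)
```

Note that only seven poles are needed to span four decades of frequency, illustrating why a small <italic>M</italic> suffices.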
<p>In Appendix A.4, we investigate what type of scaling will also generate <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup>, taking into account the effects of feedback (through &#x003BA; (<italic>f</italic>)). We conclude that, because of the feedback, a value of &#x003BC; &#x0003E; 0 would not change the exponent of <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) over a &#x0201C;reasonable&#x0201D; range of parameters (i.e., assuming <italic>&#x003BD;</italic> &#x0003E; &#x02212;2). Therefore, the simplest way to generate <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> is to take &#x003BC; &#x0003D; 0. In this case, we have (Equation 59), for &#x02212;1 &#x0003C; <italic>&#x003BD;</italic> &#x0003C; 1 and |&#x003BB;<sub><italic>M</italic></sub>| &#x0226A; 2&#x003C0; <italic>f</italic> &#x0226A; |&#x003BB;<sub>1</sub>|,</p>
<disp-formula id="E36"><label>(30)</label><mml:math id="M39"><mml:msub><mml:mi>S</mml:mi><mml:mi>Y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x0221D;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mfrac><mml:mrow><mml:msup><mml:mi>f</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mi>&#x003BD;</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>ln</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mi>f</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where the logarithmic correction arises from the effect of feedback &#x003BA; (<italic>f</italic>). A few comments on Equation (30) are in order at this point.</p>
<list list-type="order">
<list-item><p>Due to the logarithmic correction, in order to approximate <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> it is a reasonable choice to set <italic>&#x003BD;</italic> slightly higher than &#x003B1;&#x02212;1, e.g.,</p> 
<p><disp-formula id="E37"><label>(31)</label><mml:math id="M40"><mml:mi>&#x003BD;</mml:mi><mml:mo>=</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>0.9.</mml:mn></mml:math></disp-formula></p></list-item>
<list-item><p>Even if there is no scaling in the parameters (i.e., &#x003BC; &#x0003D; <italic>&#x003BD;</italic> &#x0003D; 0), we obtain <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;1</sup> (neglecting logarithmic factors).</p></list-item>
<list-item><p>Equation (30) is based on an asymptotic derivation, which is correct in two opposing limits (&#x0201C;sparse&#x0201D; or &#x0201C;dense&#x0201D; poles, Appendix A.5), indicating that these results are rather robust to parameter perturbations.</p></list-item>
<list-item><p>The number of ion channels <italic>N</italic><sub><italic>s</italic>,1</sub> is inversely proportional to the magnitude of <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) (i.e., to its proportionality constant), while the value of <italic>w</italic><sub>1</sub> (the magnitude of the weights) does not affect <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>).</p></list-item>
<list-item><p>When <italic>N</italic><sub><italic>s</italic>,1</sub> &#x02192; &#x0221E; we have <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x02192; 0, implying that in the deterministic limit, such a CBM does not generate <italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise (in accordance with our results from Soudry and Meir, <xref ref-type="bibr" rid="B45">2012</xref>).</p></list-item>
</list>
</sec>
</sec>
<sec>
<title>3.3. The input&#x02013;output relation of the neuron</title>
<p>In the previous section we derived minimal biophysical constraints under which a neuron may generate <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics in response to periodic stimulation. In this section we explore the input&#x02013;output relation of the neuron under these constraints, in the case where the inter-stimulus intervals <italic>T</italic><sub><italic>m</italic></sub> form a general (sparse) random process. We decompose the neuronal response into contributions from its &#x0201C;long&#x0201D; history of internal fluctuations and its &#x0201C;short&#x0201D; history of inputs, quantifying neuronal memory.</p>
<sec>
<title>3.3.1. The linearized input&#x02013;output relation</title>
<p>Recall that <inline-formula><mml:math id="M41"><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover></mml:math></inline-formula><sub><italic>m</italic></sub> &#x0225C; <italic>T</italic><sub><italic>m</italic></sub> &#x02212; <italic>T</italic><sub>&#x0002A;</sub>, with <italic>T</italic><sub>&#x0002A;</sub> &#x0225C; &#x02329; <italic>T</italic><sub><italic>m</italic></sub> &#x0232A; and <italic>S</italic><sub><italic>T</italic></sub> (<italic>f</italic>) the PSD of <italic>T</italic><sub><italic>m</italic></sub> (Equation 2). As explained in Soudry and Meir (<xref ref-type="bibr" rid="B43">2014</xref>), for a <italic>general</italic> CBM<xref ref-type="fn" rid="fn0009"><sup>9</sup></xref> we can decompose <italic>&#x00176;</italic><sub><italic>m</italic></sub>, the fluctuations in the neuronal response, to a linear sum of the history of the input and internal noise, i.e.,</p>
<disp-formula id="E38"><label>(32)</label><mml:math id="M42"><mml:msub><mml:mover accent='true'><mml:mi>Y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>m</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>0</mml:mn></mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:munderover><mml:mrow><mml:msubsup><mml:mi>h</mml:mi><mml:mi>k</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msubsup></mml:mrow></mml:mstyle><mml:msub><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mrow><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>0</mml:mn></mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:munderover><mml:mrow><mml:msubsup><mml:mi>h</mml:mi><mml:mi>k</mml:mi><mml:mrow><mml:mtext>int</mml:mtext></mml:mrow></mml:msubsup></mml:mrow></mml:mstyle><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>with the filter <italic>h</italic><sup>ext</sup><sub><italic>k</italic></sub> used to integrate <italic>external</italic> fluctuations in the inputs, and the filter <italic>h</italic><sup>int</sup><sub><italic>k</italic></sub> used to integrate <italic>z</italic><sub><italic>m</italic></sub>, a zero mean and unit variance white noise representing <italic>internal</italic> fluctuations (e.g., ion channel noise). It is easier to analyze this I/O in the frequency domain, where Equation (32) becomes (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>, Equation 20)</p>
<disp-formula id="E39"><label>(33)</label><mml:math id="M43"><mml:mover accent='true'><mml:mi>Y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>int</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mi>z</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where we define <italic>X</italic>(<italic>f</italic>) to be the Fourier transform of <italic>X</italic>(<italic>t</italic>). Together, <italic>H</italic><sup>ext</sup>(<italic>f</italic>) and <italic>H</italic><sup>int</sup>(<italic>f</italic>) describe the <inline-formula><mml:math id="M44"><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover></mml:math></inline-formula><sub><italic>m</italic></sub> &#x02192; <italic>&#x00176;</italic><sub><italic>m</italic></sub> neuronal I/O at very long timescales.</p>
<p>Note that these filters are related to the PSDs, in the following way</p>
<disp-formula id="E40"><label>(34)</label><mml:math id="M45"><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>Y</mml:mi><mml:mi>T</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>f</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>T</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:math></disp-formula>
<disp-formula id="E41"><label>(35)</label><mml:math id="M46"><mml:msub><mml:mi>S</mml:mi><mml:mi>Y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:msub><mml:mi>S</mml:mi><mml:mi>T</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>int</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where we recall that <italic>S</italic><sub><italic>YT</italic></sub> (<italic>f</italic>) is the cross-PSD (Equation 3). Notably, from Equation (35), if the input to the neuron is not periodic (so that <italic>S</italic><sub><italic>T</italic></sub> (<italic>f</italic>) &#x02260; 0), then the PSD <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) is the same as calculated previously, except for the addition of the term |<italic>H</italic><sup>ext</sup>(<italic>f</italic>)|<sup>2</sup><italic>S</italic><sub><italic>T</italic></sub> (<italic>f</italic>).</p>
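<p>Equation (34) suggests a practical estimator for the input filter: divide the measured cross-PSD by the input PSD. The following Python sketch illustrates this on a synthetic linear system; the single-pole low-pass filter standing in for <italic>H</italic><sup>ext</sup> and all numerical values are hypothetical, chosen only to demonstrate that the independent internal noise drops out of the cross-spectrum.</p>

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n = 1 << 16
t_hat = rng.standard_normal(n)              # broadband input fluctuations

# Hypothetical single-pole low-pass filter standing in for H^ext:
# y[m] = a * x[m] + (1 - a) * y[m - 1], with DC gain 1
a = 0.05
y_hat = signal.lfilter([a], [1.0, -(1.0 - a)], t_hat)
y_hat += 0.5 * rng.standard_normal(n)       # independent internal fluctuations

# Equation (34): H^ext(f) ~ S_YT(f) / S_T(f); the internal noise is
# uncorrelated with the input, so it averages out of the cross-PSD
f, s_t = signal.welch(t_hat, nperseg=4096)
_, s_yt = signal.csd(t_hat, y_hat, nperseg=4096)
h_est = s_yt / s_t

def h_true(freq):
    # exact frequency response of the stand-in filter (freq in cycles/sample)
    return a / (1.0 - (1.0 - a) * np.exp(-2j * np.pi * freq))
```

<p>With enough averaged segments, the low-frequency gain of the estimate approaches the true filter, while at high frequencies the estimate collapses toward zero, as expected for a low pass filter.</p>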
</sec>
<sec>
<title>3.3.2. The shape of the input&#x02013;output filters</title>
<p>For a general CBM, we can derive semi-analytically the exact form of the filters in Equation (33) from its parameters, as we did for <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>). For example, if <inline-formula><mml:math id="M47"><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover></mml:math></inline-formula><sub><italic>m</italic></sub> &#x0003D; 0 (periodic input), then also <italic>S</italic><sub><italic>T</italic></sub> (<italic>f</italic>) &#x0003D; 0, and so</p>
<disp-formula id="E42"><label>(36)</label><mml:math id="M48"><mml:msup><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>H</mml:mi><mml:mrow><mml:mtext>int</mml:mtext></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>=</mml:mo><mml:msub><mml:mi>S</mml:mi><mml:mi>Y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) is the PSD we derived previously (Equation 10). Additionally, we obtain (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>, Equation 17)</p>
<disp-formula id="E43"><label>(37)</label><mml:math id="M49"><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mo>&#x022A4;</mml:mo></mml:msup><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>H</mml:mi></mml:mstyle><mml:mi>c</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>d</mml:mi></mml:mstyle><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>with</p>
<disp-formula id="E44"><label>(38)</label><mml:math id="M50"><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>d</mml:mi></mml:mstyle><mml:mo>&#x0225C;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>A</mml:mi></mml:mstyle><mml:mn>0</mml:mn></mml:msub><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>s</mml:mi></mml:mstyle><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>b</mml:mi></mml:mstyle><mml:mn>0</mml:mn></mml:msub><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>Next, we find both filters for the minimal model described in section 3.2.2. Recall that in this model</p>
<disp-formula id="E45"><label>(39)</label><mml:math id="M51"><mml:msub><mml:mi>w</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>;</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>&#x0221D;</mml:mo><mml:msup><mml:mi>&#x003F5;</mml:mi><mml:mi>k</mml:mi></mml:msup><mml:mo>;</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>&#x0221D;</mml:mo><mml:msup><mml:mi>&#x003F5;</mml:mi><mml:mi>k</mml:mi></mml:msup></mml:math></disp-formula>
<p>with <italic>a</italic><sub>1</sub> and <italic>d</italic><sub>1</sub> given by Equations (8) and (38), respectively. To simplify the analysis, we derive an asymptotic form for both filters, for the cases |&#x003BB;<sub><italic>M</italic></sub>| &#x0226A; 2&#x003C0; <italic>f</italic> &#x0226A; |&#x003BB;<sub>1</sub>| and 2&#x003C0; <italic>f</italic> &#x0226B; |&#x003BB;<sub>1</sub>|. First, from Equations (36) and (59), we find</p>
<disp-formula id="E46"><label>(40)</label><mml:math id="M52"><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>H</mml:mi><mml:mrow><mml:mtext>int</mml:mtext></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>~</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:msup><mml:mi>f</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>/</mml:mo><mml:mi>ln</mml:mi><mml:mi>f</mml:mi><mml:mo>,</mml:mo><mml:mtext>if</mml:mtext><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mi>M</mml:mi></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x0226A;</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:mo>&#x0226A;</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mtext>constant</mml:mtext><mml:mo>,</mml:mo><mml:mtext>if</mml:mtext><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:mo>&#x0226B;</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msub><mml:mi>&#x003BB;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow> </mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Similarly, from Equation (37), we find (Appendix A.6) that for the minimal model the interpolation between the two asymptotic cases is monotonic, so we can approximate</p>
<disp-formula id="E47"><label>(41)</label><mml:math id="M53"><mml:mrow><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02248;</mml:mo><mml:mfrac><mml:mrow><mml:mi>q</mml:mi><mml:msub><mml:mi>d</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>q</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>q</italic> &#x0225C; (1 &#x02212; &#x003F5;)<sup>&#x02212;1</sup><italic>T</italic><sup>&#x02212;1</sup><sub>&#x0002A;</sub><italic>w</italic><sub>1</sub>. A few comments on Equations (40, 41) are in order at this point.</p>
<list list-type="order">
<list-item><p>We found that <italic>H</italic><sup>ext</sup>(<italic>f</italic>) is a low pass filter with a pole at <italic>f</italic><sub>ext</sub> &#x0003D; <italic>qa</italic><sub>1</sub>/2&#x003C0; while <italic>H</italic><sup>int</sup>(<italic>f</italic>) &#x0007E; <italic>f</italic><sup>&#x02212;&#x003B1;/2</sup> for 2&#x003C0; <italic>f</italic> &#x0226A; |&#x003BB;<sub>1</sub>|. Consequently, in the temporal domain (Equation 32), for large <italic>t</italic> (i.e., large <italic>k</italic>), the neuron&#x00027;s memory of its external input decays exponentially (<italic>h</italic><sup>ext</sup><sub><italic>k</italic></sub> &#x0007E; <italic>e</italic><sup>&#x02212;2&#x003C0;<italic>f</italic><sub>ext</sub><italic>T</italic><sub>&#x0002A;</sub><italic>k</italic></sup>), while its memory of its internal fluctuations decays as a power law (<italic>h</italic><sup>int</sup><sub><italic>k</italic></sub> &#x0007E; <italic>k</italic><sup>&#x02212;(1&#x02212;&#x003B1;/2)</sup>). Therefore, the input memory has a finite timescale (equal to <italic>f</italic><sup>&#x02212;1</sup><sub>ext</sub>), while the memory of internal fluctuations is &#x0201C;long&#x0201D; (with a cutoff only near <italic>f</italic><sup>&#x02212;1</sup><sub>min</sub>).</p></list-item>
<list-item><p>It is perhaps surprising that Equation (37), which has multiple poles, becomes a low pass filter with a single pole <italic>f</italic><sub>ext</sub>. The derivation (Appendix A.6) gives two main reasons for this. First, the scaling of <italic>w</italic><sub><italic>k</italic></sub> and <italic>d</italic><sub><italic>k</italic></sub> in Equation (39) induces only a weak (logarithmic) scaling of the poles in open loop. Second, even this weak scaling is canceled by the effects of the feedback.</p></list-item>
<list-item><p>Naturally, other models may have a different shape of <italic>H</italic><sup>ext</sup>(<italic>f</italic>). This could be probed directly, as we explain later, in section 3.4.3.</p></list-item>
</list>
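<p>As a rough numerical check of the comments above, the following Python sketch evaluates the single-pole form of Equation (41). The constants <italic>a</italic><sub>1</sub>, <italic>d</italic><sub>1</sub>, and <italic>w</italic><sub>1</sub> are hypothetical stand-ins, not fitted values; only the functional form is taken from the text.</p>

```python
import numpy as np

# Hypothetical linearization constants (not fitted values): a1 < 0 is a
# stable slow relaxation rate, d1 the input coupling (Equation 38)
T_star, eps, w1 = 0.05, 0.2, 1.0     # T_* = 50 ms; eps and w1 illustrative
a1, d1 = -0.01, 0.02

q = w1 / ((1.0 - eps) * T_star)      # q = (1 - eps)^-1 T_*^-1 w_1

def H_ext(f):
    """Equation (41): H^ext(f) ~ q d1 / (2 pi f i - q a1)."""
    return q * d1 / (2j * np.pi * f - q * a1)

f_ext = q * abs(a1) / (2 * np.pi)    # pole (cutoff) of the low pass filter
gain0 = abs(H_ext(0.0))              # DC gain, here |d1 / a1|

# Exponential input memory (comment 1 above): the impulse response
# decays by this factor per stimulation period T_*
retention = np.exp(-2 * np.pi * f_ext * T_star)
```

<p>The sketch confirms the standard low-pass signatures: the gain drops by a factor of &#x0221A;2 at the pole frequency and rolls off as 1/<italic>f</italic> well above it, so the external-input memory indeed has the single finite timescale <italic>f</italic><sup>&#x02212;1</sup><sub>ext</sub>.</p>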
</sec>
</sec>
<sec>
<title>3.4. Modeling experimental results</title>
<p>In this section we apply our results to experimental data, described in section 3.4.1. In section 3.4.2 we implement the set of &#x0201C;minimal constraints&#x0201D; we found in section 3.2.2 in a specific CBM, and fit it to experimental data in which <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup>. The analytical results in section 3.2 suggest that this specific CBM is a &#x0201C;reasonable&#x0201D; representative of the family of CBMs that can generate the experimental results. Other members of this family can be reached by varying the parameters within the (either minimal or general) constraints. Next, in section 3.4.3 we use our results from section 3.3.2 on the fitted model. We show that, although internal fluctuations in the model can affect the neural response on a timescale of days, the memory of the input is only retained for a duration of minutes. We suggest specific experiments to test this prediction. In section 3.4.4 we suggest further predictions.</p>
<sec>
<title>3.4.1. Experimental details</title>
<p>We consider the experiment from Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>), in which a single <italic>synaptically isolated</italic> neuron, residing in a culture of rat cortical neurons, is stimulated periodically with a train of short extracellular current pulses of constant amplitude <italic>I</italic><sub>0</sub>. The observed neuronal response was characterized by different modes (Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>, Figure <xref ref-type="fig" rid="F2">2</xref>). We focus on the &#x0201C;intermittent mode&#x0201D; steady state, in which 0 &#x0003C; <italic>p</italic><sub>&#x0002A;</sub> &#x0003C; 1 (i.e., sometimes the stimulation evokes an AP, and sometimes it does not). The patterns observed in <italic>Y</italic><sub><italic>m</italic></sub>, the AP occurrence timeseries, are rather irregular (Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>, Figure <xref ref-type="fig" rid="F2">2E</xref>), span multiple timescales (Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>, Figure <xref ref-type="fig" rid="F5">5</xref>) and are variable (i.e., patterns are not repeatable; Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>, Figure 9A). More quantitatively, as indicated by the analysis (Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>, Figure 6), for <italic>all</italic> intermittently firing neurons, the patterns in <italic>Y</italic><sub><italic>m</italic></sub> fall into the category of &#x0201C;<italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise,&#x0201D; where the value of &#x003B1; varied significantly between neurons &#x02013; with</p>
<disp-formula id="E48"><label>(42)</label><mml:math id="M54"><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mo>=</mml:mo><mml:mn>1.43</mml:mn><mml:mo>&#x000B1;</mml:mo><mml:mn>0.35</mml:mn></mml:mrow></mml:math></disp-formula>
<p>(mean &#x000B1; <italic>SD</italic>). As we explained in section 3.1, this <italic>f</italic><sup>&#x02212;&#x003B1;</sup> behavior is true only in some limited range <italic>f</italic><sub>min</sub> &#x0003C; <italic>f</italic> &#x0003C; <italic>f</italic><sub>max</sub>. From the experimental data (Figure 6C in Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>), it can be estimated that <italic>f</italic><sub>min</sub> &#x0003C; 10<sup>&#x02212;5</sup>Hz and <italic>f</italic><sub>max</sub> &#x0007E; 10<sup>&#x02212;2</sup>Hz. Also, since &#x003B1; &#x0003E; 1, we must have <italic>f</italic><sub>min</sub> &#x0003E; 0 (see section 3.1).</p>
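<p>The &#x0201C;<italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise&#x0201D; classification used here amounts to a log-log fit to the periodogram over the range where the power law holds. As an illustration, the Python sketch below generates a synthetic series with a spectrum shaped as <italic>f</italic><sup>&#x02212;&#x003B1;</sup> for &#x003B1; &#x0003D; 1.4 (the mean reported value) and recovers &#x003B1; from a Welch periodogram; the series length, fit band, and synthesis method are illustrative, not taken from the experimental pipeline.</p>

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
alpha, n = 1.4, 1 << 18

# Spectral synthesis: shape white noise by f^(-alpha/2) in the frequency
# domain, so the resulting series has PSD proportional to f^(-alpha)
white = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n)
shape = np.zeros_like(f)
shape[1:] = f[1:] ** (-alpha / 2.0)
series = np.fft.irfft(white * shape)

# Recover alpha as minus the log-log slope of the PSD over an
# intermediate frequency band (frequencies in cycles/sample)
freqs, psd = signal.welch(series, nperseg=8192)
band = (freqs > 1e-3) & (freqs < 5e-2)
slope = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)[0]
alpha_hat = -slope
```

<p>Note that, as in the experimental analysis, the fit must be restricted to frequencies between the cutoffs <italic>f</italic><sub>min</sub> and <italic>f</italic><sub>max</sub>; outside this band the power-law model no longer applies.</p>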
</sec>
<sec>
<title>3.4.2. The HHMS model &#x02013; a biophysical implementation of the minimal constraints</title>
<p>In our previous work (Soudry and Meir, <xref ref-type="bibr" rid="B45">2012</xref>) we fitted a model that captures many of the &#x0201C;mean&#x0201D; properties of the neuronal response (e.g., firing modes, transients and firing rate). This model is an extension of the original Hodgkin&#x02013;Huxley model which includes slow sodium inactivation (Chandler and Meves, <xref ref-type="bibr" rid="B4">1970</xref>; Fleidervish et al., <xref ref-type="bibr" rid="B8">1996</xref>) (the HHS model; see Appendix C.1). In order to maintain this fit with the experimental results, we extend the HHS model with additional slow components, obeying Equation (18). We denote this as the HHMS model (Hodgkin&#x02013;Huxley model with Many Slow variables, Appendix C.2). The equations are identical to those of the HHS model, except that in the voltage equation (Equation 73) <italic><overline>g</overline></italic><sub><italic>Na</italic></sub><italic>s</italic> is replaced by <italic><overline>g</overline></italic><sub><italic>Na</italic></sub><italic>M</italic><sup>&#x02212;1</sup> <inline-formula><mml:math id="M55"><mml:mrow><mml:mstyle displaystyle='true'><mml:msubsup><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:msubsup><mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle></mml:mrow></mml:math></inline-formula>, where <italic>s</italic><sub>1</sub> has the same equation as <italic>s</italic> in the HHS model (Equation 77). By symmetry, this gives identical weights to each component <italic>s</italic><sub><italic>i</italic></sub> (i.e., &#x02200;<italic>k</italic>: <italic>w</italic><sub><italic>k</italic></sub> &#x0003D; <italic>w</italic><sub>1</sub>).
The remaining rates (for <italic>k</italic> &#x02265; 2) are chosen according to our constraints, so &#x003B3;<sub><italic>k</italic></sub>(<italic>V</italic>) &#x0003D; &#x003B3;(<italic>V</italic>) &#x003F5;<sup><italic>k</italic> &#x02212; 1</sup>, &#x003B4;<sub><italic>k</italic></sub>(<italic>V</italic>) &#x0003D; &#x003B4;(<italic>V</italic>) &#x003F5;<sup><italic>k</italic> &#x02212; 1</sup> (as in Equation 27), where &#x003B3;(<italic>V</italic>) and &#x003B4;(<italic>V</italic>) are taken from the HHS model (Equation 77) and also <italic>N</italic><sub><italic>s</italic>,<italic>k</italic></sub> &#x0003D; <italic>N</italic><sub><italic>s</italic></sub> &#x003F5;<sup><italic>&#x003BD; k</italic></sup>. Therefore, the only free parameters are &#x003F5;, <italic>M</italic>, <italic>N</italic><sub><italic>s</italic></sub>, <italic>&#x003BD;</italic> and <italic>I</italic><sub>0</sub> (<italic>I</italic><sub>0</sub> is the current amplitude of the stimulation pulses).</p>
<p>This model can be used to fit the experimental results for any &#x003B1; &#x02208; [0,2). We performed a numerical simulation of the full equations (Equations 4&#x02013;6) of the HHMS model under periodic stimulation with <italic>T</italic><sub>&#x0002A;</sub> &#x0003D; 50 ms. We aimed to fit an experiment from Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>), which had a similar stimulation and exhibited <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0007E; <italic>f</italic><sup>&#x02212;&#x003B1;</sup>, with &#x003B1; &#x0003D; 1.4 (approximately the average &#x003B1; value measured in Gal et al., <xref ref-type="bibr" rid="B9">2010</xref>). The current amplitude was set to <italic>I</italic><sub>0</sub> &#x0003D; 7.7 &#x003BC;A so that the model would have the same mean response probability <italic>p</italic><sub>&#x0002A;</sub> &#x02248; 0.4 as the experimental data (using the self-consistent equations for <italic>p</italic><sub>&#x0002A;</sub> from Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>). We chose <italic>M</italic> &#x0003D; 5 and &#x003F5; &#x0003D; 0.2 in order to satisfy constraint Equation (29) with a minimal <italic>M</italic>. We chose <italic>&#x003BD;</italic> &#x0003D; 0.5 to satisfy Equation (31). Lastly, we chose <italic>N</italic><sub><italic>s</italic></sub> &#x0003D; 10<sup>4</sup> in order to fit the magnitude of <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>). This reproduced all the scaling relations observed experimentally (Figure <xref ref-type="fig" rid="F3">3</xref>).</p>
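<p>The geometric parameter ladder described above can be written out explicitly. The Python sketch below lists the slow-variable rates and channel numbers implied by the quoted choices (<italic>M</italic> &#x0003D; 5, &#x003F5; &#x0003D; 0.2, <italic>&#x003BD;</italic> &#x0003D; 0.5, <italic>N</italic><sub><italic>s</italic></sub> &#x0003D; 10<sup>4</sup>); the base rate is a hypothetical stand-in, since in the HHMS model &#x003B3;<sub><italic>k</italic></sub>(<italic>V</italic>) is voltage dependent.</p>

```python
import numpy as np

# Quoted HHMS fit parameters; gamma0 is a hypothetical base rate (1/s)
# standing in for the voltage-dependent gamma(V) of the HHS model
M, eps, nu, N_s = 5, 0.2, 0.5, 1e4
gamma0 = 1e-2

k = np.arange(1, M + 1)
gamma_k = gamma0 * eps ** (k - 1)    # rates scale as eps^(k-1)  (Equation 27)
N_k = N_s * eps ** (nu * k)          # channel numbers: N_s * eps^(nu k)

# Each slow variable contributes fluctuations around timescale 1/gamma_k,
# so the M rates span an eps^-(M-1)-fold range of timescales
span = gamma_k[0] / gamma_k[-1]
```

<p>With these values the ladder spans a 625-fold range of timescales, and slower variables have fewer channels (hence relatively larger fluctuations), which is what shapes the low-frequency part of <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>).</p>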
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>The measures of &#x0201C;scale free&#x0201D; rate dynamics in the HHMS model &#x02013; comparison of the experimental data from Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>) and a simulation of the extended HHS model (solid and dashed lines, respectively)</bold>. We use here the same measures as in Figure 6 in Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>): <bold>(A)</bold> The firing rate fluctuations estimated using bins of different sizes (<italic>T</italic> &#x0003D; 10, 30, 100, and 300 s) and plotted on a normalized time axis (units in number of bins), after subtracting the mean of each series. <bold>(B)</bold> CV of the bin counts, as a function of bin size, plotted on a log-linear axis. <bold>(C)</bold> Firing rate periodogram. <bold>(D)</bold> Detrended fluctuations analysis. <bold>(E)</bold> Fano factor (FF) curve. <bold>(F)</bold> Allan factor (AF) curve. <bold>(G)</bold> Length distribution of spike&#x02013;response sequences, on half-logarithmic axes. <bold>(H)</bold> Length distribution of no-spike-response sequences, on double-logarithmic axes. For additional details on measures used see Appendix B.1.</p></caption>
<graphic xlink:href="fncom-08-00035-g0003.tif"/>
</fig>
</sec>
<sec>
<title>3.4.3. Predictions &#x02013; probing the input&#x02013;output relation of the neuron</title>
<p>After fitting the HHMS model to the experimental results, we can examine its resulting linearized input&#x02013;output relation, described by the filters <italic>H</italic><sup>ext</sup>(<italic>f</italic>) and <italic>H</italic><sup>int</sup>(<italic>f</italic>) (Equation 33). The <italic>H</italic><sup>int</sup>(<italic>f</italic>) filter integrates internal fluctuations, while the <italic>H</italic><sup>ext</sup>(<italic>f</italic>) filter determines how external fluctuations (in the input) affect its response.</p>
<p>In accordance with the asymptotic forms in Equations (40) and (41), we find that <italic>H</italic><sup>ext</sup>(<italic>f</italic>) is a low pass filter with a pole <italic>f</italic><sub>ext</sub> &#x0007E; 10<sup>&#x02212;2</sup> Hz (Figure <xref ref-type="fig" rid="F4">4</xref>, green) while <italic>H</italic><sup>int</sup>(<italic>f</italic>) &#x0007E; <italic>f</italic><sup>&#x02212;&#x003B1;/2</sup> for <italic>f</italic><sub>min</sub> &#x0003C; <italic>f</italic> &#x0003C; 10<sup>&#x02212;2</sup> Hz (Figure <xref ref-type="fig" rid="F4">4</xref>, red) with <italic>f</italic><sub>min</sub> &#x0003C; 10<sup>&#x02212;5</sup> Hz. Therefore, as explained in section 3.3.2, this model implies that the response of the neuron is affected by internal fluctuations over the scale of days (&#x0007E; <italic>f</italic><sup>&#x02212;1</sup><sub>min</sub>) or more, generating the <italic>f</italic><sup>&#x02212;&#x003B1;</sup> behavior we observe in Figure <xref ref-type="fig" rid="F3">3</xref>. However, external input is remembered only for minutes (&#x0007E; <italic>f</italic><sup>&#x02212;1</sup><sub>ext</sub>).</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>System decomposition into external (input &#x02013; <inline-formula><mml:math id="M56"><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover></mml:math></inline-formula>(<italic>f</italic>)) and internal (fluctuation &#x02013; <italic>z</italic>(<italic>f</italic>)) filters</bold>. For the fitted HHMS model, <italic>H</italic><sup>ext</sup>(<italic>f</italic>) is a low pass filter with cutoff &#x0003C;10<sup>&#x02212;2</sup> Hz while <italic>H</italic><sup>int</sup>(<italic>f</italic>) &#x0007E; <italic>f</italic><sup>&#x02212;&#x003B1;/2</sup> for <italic>f</italic> &#x0003C; 10<sup>&#x02212;2</sup> Hz.</p></caption>
<graphic xlink:href="fncom-08-00035-g0004.tif"/>
</fig>
<p>Next, we examine two methods which allow us to probe <italic>H</italic><sup>ext</sup>(<italic>f</italic>) directly and examine these predictions.</p>
<p>First, a simple method to probe the external input filter <italic>H</italic><sup>ext</sup>(<italic>f</italic>) is through Equation (34). Reliable estimation of <italic>H</italic><sup>ext</sup>(<italic>f</italic>) in a certain frequency range requires a random process stimulation for which |<italic>H</italic><sup>ext</sup>(<italic>f</italic>)|<sup>2</sup><italic>S</italic><sub><italic>T</italic></sub> (<italic>f</italic>) &#x0226B; |<italic>H</italic><sup>int</sup>(<italic>f</italic>)|<sup>2</sup> in that range, as explained in Appendix B.2. To demonstrate this method we estimate <italic>S</italic><sub><italic>TY</italic></sub> (<italic>f</italic>) from the existing experimental data taken from Gal and Marom (<xref ref-type="bibr" rid="B10">2013</xref>), in which <italic>S</italic><sub><italic>T</italic></sub> (<italic>f</italic>) &#x0007E; <italic>f</italic><sup>&#x02212;&#x003B2;</sup> (above some lower cutoff). In Figure <xref ref-type="fig" rid="F5">5A</xref> we compare this estimate with <italic>S</italic><sub><italic>TY</italic></sub> (<italic>f</italic>) in the fitted HHMS model, in a limited range where <italic>S</italic><sub><italic>T</italic></sub> (<italic>f</italic>) is sufficiently high for the estimation to be accurate. The two are similar, validating our estimate of <italic>H</italic><sup>ext</sup>(<italic>f</italic>) for that frequency range.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Input memory in the fitted model</bold>. <bold>(A)</bold> Comparison of |<italic>S</italic><sub><italic>YT</italic></sub> (<italic>f</italic>)| of the fitted model (&#x0201C;Model&#x0201D;) to that estimated from the experimental data (&#x0201C;Experiment&#x0201D;) confirms the prediction of the input filter |<italic>H</italic><sup>ext</sup>(<italic>f</italic>)| for the probed range. <bold>(B)</bold> This filter (&#x0201C;Approx&#x0201D;) can be probed more accurately using the peaks of <italic>&#x00176;</italic> (<italic>f</italic>) (&#x0201C;Simulation&#x0201D;), by applying a &#x0201C;sum of sines&#x0201D; input (Equation 43).</p></caption>
<graphic xlink:href="fncom-08-00035-g0005.tif"/>
</fig>
<p>Second, the filter <italic>H</italic><sup>ext</sup>(<italic>f</italic>) could be probed more accurately, and at lower frequencies, by sinusoidally modulating the input (the inter-stimulus intervals), analogously to the sinusoidally modulated input current used in Lundstrom et al. (<xref ref-type="bibr" rid="B24">2008</xref>, <xref ref-type="bibr" rid="B23">2010</xref>) and Pozzorini et al. (<xref ref-type="bibr" rid="B36">2013</xref>),</p>
<disp-formula id="E49"><label>(43)</label><mml:math id="M57"><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>T</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>m</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mtext>amp</mml:mtext></mml:mrow></mml:msub><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>l</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>L</mml:mi></mml:munderover><mml:mrow><mml:mi>sin</mml:mi></mml:mrow></mml:mstyle><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mi>l</mml:mi></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mi>m</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>As we explain in Appendix B.3, in this case the output of the neuron would be</p>
<disp-formula id="E50"><mml:math id="M58"><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>Y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>m</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>l</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>L</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mtext>amp</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:mstyle><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>f</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mi>sin</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mi>l</mml:mi></mml:msub><mml:msub><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mo>&#x02220;</mml:mo><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>f</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mtext>&#x0201C;noise&#x0201D;</mml:mtext><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>This allows us to easily estimate |<italic>H</italic><sup>ext</sup>(<italic>f</italic>)| using the peaks of <italic>T</italic><sub>amp</sub><sup>&#x02212;1</sup> <italic>&#x00176;</italic> (<italic>f</italic>) (the Fourier transform of <italic>T</italic><sup>&#x02212;1</sup><sub>amp</sub> <italic>&#x00176;</italic><sub><italic>m</italic></sub>) at frequencies <italic>f</italic><sub><italic>l</italic></sub>, as we demonstrate in Figure <xref ref-type="fig" rid="F5">5B</xref>, using our fitted HHMS model.</p>
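<p>The &#x0201C;sum of sines&#x0201D; probe can be sketched numerically. In the Python example below, a hypothetical single-pole low-pass filter with cutoff 10<sup>&#x02212;2</sup> Hz stands in for the neuron&#x00027;s <italic>H</italic><sup>ext</sup>(<italic>f</italic>); the probe frequencies, record length, and noise level are illustrative, not taken from the experiment.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
T_star, n = 0.05, 1 << 18            # 50 ms base period, n stimuli
bins = np.array([64, 512, 4096])     # probe lines chosen on the FFT grid
f_l = bins / (n * T_star)            # probe frequencies in Hz

# Hypothetical single-pole low-pass stand-in for H^ext with cutoff 1e-2 Hz
f_ext = 1e-2
gain_true = 1.0 / np.sqrt(1.0 + (f_l / f_ext) ** 2)
phase_true = -np.arctan(f_l / f_ext)

# Steady-state response to the sum-of-sines input, plus "noise"
m = np.arange(n)
y = sum(g * np.sin(2 * np.pi * f * T_star * m + p)
        for g, f, p in zip(gain_true, f_l, phase_true))
y += 0.3 * rng.standard_normal(n)

# Each probed line has FFT peak magnitude (n/2) * |H^ext(f_l)|,
# so the gains are read off directly from the spectrum
spectrum = np.fft.rfft(y)
gain_est = 2.0 / n * np.abs(spectrum[bins])
```

<p>Because the probe energy is concentrated on a few known frequencies, the broadband internal noise spreads over all FFT bins and barely perturbs the peak estimates, which is why this method can reach lower frequencies than the cross-PSD approach.</p>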
</sec>
<sec>
<title>3.4.4. Additional predictions</title>
<p>As explained in Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>) and Soudry and Meir (<xref ref-type="bibr" rid="B45">2012</xref>), the latency of the AP can serve as an indicator of the cell&#x00027;s excitability. Specifically, this is true in the HHMS model, for a periodic stimulus and <italic>p</italic><sub>&#x0002A;</sub> &#x0003D; 1, where the PSD of the latency, <italic>S</italic><sub><italic>L</italic></sub> (<italic>f</italic>), is a shifted and scaled version of <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) with <italic>p</italic><sub>&#x0002A;</sub> &#x02192; 1 (see section 4.4.6 in Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>). Therefore, in the HHMS model we also have <italic>S</italic><sub><italic>L</italic></sub> (<italic>f</italic>) &#x0221D; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> approximately (neglecting logarithmic factors).</p>
<p>Next, suppose we vary some measurable stimulation parameter, such as the mean stimulation rate <italic>T</italic><sup>&#x02212;1</sup><sub>&#x0002A;</sub>. How would this affect the shape of the filters we derived? The analytical results allow us to calculate this explicitly in the HHMS model.</p>
<p>First, we consider the gain of the external input filter <italic>H</italic><sup>ext</sup>(<italic>f</italic>) (i.e., <italic>H</italic><sup>ext</sup>(0)). As we explain in Appendix A.7, if <italic>f</italic> &#x0226A; <italic>f</italic><sub>cutoff</sub>, then</p>
<disp-formula id="E51"><label>(44)</label><mml:math id="M59"><mml:mrow><mml:msup><mml:mi>H</mml:mi><mml:mrow><mml:mtext>ext</mml:mtext></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02248;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mo>&#x02217;</mml:mo></mml:msub><mml:msubsup><mml:mi>T</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msub><mml:mover accent='true'><mml:mi>f</mml:mi><mml:mo>&#x000AF;</mml:mo></mml:mover><mml:mrow><mml:mtext>out</mml:mtext></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>which is the mean firing rate of the neuron &#x02013; an easily measurable quantity.</p>
<p>Second, how would <italic>H</italic><sup>int</sup>(<italic>f</italic>) change if <italic>T</italic><sub>&#x0002A;</sub> is varied? Since <italic>H</italic><sup>int</sup>(<italic>f</italic>) is directly measurable only through <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) (Equation 36), we consider <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) instead. From Equation (59) it is clear that if <italic>S</italic><sub><italic>Y</italic></sub> (<italic>f</italic>) &#x0007E; <italic>f</italic><sup>&#x02212;&#x003B1;</sup> approximately at low frequencies, then the exponent &#x003B1; should not depend much on any external parameter (assuming 0 &#x0003C; <italic>p</italic><sub>&#x0002A;</sub> &#x0003C; 1). This was observed experimentally when the stimulation rate (<italic>T</italic><sup>&#x02212;1</sup><sub>&#x0002A;</sub>) was varied, as can be seen in Figure 1G in Gal and Marom (<xref ref-type="bibr" rid="B10a">2013</xref>).</p>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<sec>
<title>4.1. Generating <italic>f</italic><sup>&#x02212;&#x003B1;</sup> PSD</title>
<p>In this work we aim to explain biophysically the phenomenon of <italic>f</italic><sup>&#x02212;&#x003B1;</sup> behavior in the response of isolated neurons, and to explore its implications for the input&#x02013;output relation of the neuron. We do this under a regime of sparse stimulation (Figure <xref ref-type="fig" rid="F1">1</xref>), and in the biophysical framework of stochastic conductance-based models (CBMs, Equations 4&#x02013;6). In this setting our analytical results (Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>) can be used to derive a closed form expression for the Power Spectral Density (PSD, Equation 10) based on the parameters of the slow kinetics in the CBM. This PSD is affected by the closed-loop interaction &#x02013; the slow dynamics affect the AP response, which, in turn, feeds back and affects the kinetics of the slow processes (section 2.3.2). Moreover, the contribution of each slow process to the PSD can be exactly quantified (section 2.3.3), as we demonstrate using a simple model (section 2.3.4).</p>
<p>These mathematical results expose the large parameter degeneracy of CBMs (Marder and Goaillard, <xref ref-type="bibr" rid="B25">2006</xref>; Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>), i.e., that many &#x0201C;different&#x0201D; models will quantitatively produce the same behavior. Given this degeneracy, we first aimed to derive rather general sufficient conditions for the generation of <italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise in a CBM (section 3.2.1). These conditions indicate which types of CBMs can generate the observed behavior. We show that, in order to generate <italic>f</italic><sup>&#x02212;&#x003B1;</sup> behavior, neurons should have intrinsic fluctuations (e.g., due to ion channel noise), together with a number of slow processes whose timescales span a large range, &#x0201C;covering&#x0201D; the entire range over which <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics is observed. Furthermore, the parameters of these processes must be scaled in a certain way in order to generate <italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise with a specific &#x003B1; (Equation 26).</p>
<p>We implement these constraints in a minimal CBM (section 3.2.2), in which the slow processes are uncoupled, except through the voltage, as in Soudry and Meir (<xref ref-type="bibr" rid="B45">2012</xref>). We find that the specific scaling relation can be achieved by scaling either (1) the conductances or (2) the ion channel numbers. This scaling implies that slower processes will be either (1) &#x0201C;stronger&#x0201D; or (2) &#x0201C;noisier.&#x0201D; However, the &#x0201C;feedback&#x0201D; effect in the model (the slow processes being affected by the APs) prevents <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics from being generated in case (1). In contrast, option (2) can robustly generate the observed <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics in the neuronal response for any 0 &#x0003C; &#x003B1; &#x0003C; 2 (Equation 30 and Figure 6).</p>
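<p>The open-loop core of this argument can be checked numerically. The following sketch is our own generic illustration (arbitrary parameter values, not the fitted CBM): summing the Lorentzian spectra of independent relaxation processes whose timescales are geometrically spaced, with each contribution weighted as a power of its timescale, yields a summed PSD that follows <italic>f</italic><sup>&#x02212;&#x003B1;</sup> over the frequency range &#x0201C;covered&#x0201D; by the timescales:</p>

```python
import numpy as np

def lorentzian_sum_psd(f, taus, alpha):
    """PSD of a sum of independent relaxation (Lorentzian) processes.

    Each process with timescale tau contributes tau / (1 + (2*pi*f*tau)**2);
    weighting that term by tau**(alpha - 1) makes the summed PSD scale as
    f**(-alpha), for alpha between 0 and 2, over the "covered" range.
    """
    f = np.asarray(f)[:, None]        # shape (F, 1)
    taus = np.asarray(taus)[None, :]  # shape (1, K)
    weights = taus ** (alpha - 1.0)
    return (weights * taus / (1.0 + (2 * np.pi * f * taus) ** 2)).sum(axis=1)

# timescales geometrically spaced over six decades, "covering" the f range
taus = np.logspace(0, 6, 25)
alpha = 1.0
f = np.logspace(-4, -2, 100)          # probe well inside the covered range
psd = lorentzian_sum_psd(f, taus, alpha)

# the log-log slope of the summed PSD should be close to -alpha
slope = np.polyfit(np.log10(f), np.log10(psd), 1)[0]
```

<p>Repeating this with other weight exponents retunes the fitted slope accordingly, consistent with the requirement that the process parameters be scaled in a specific way to obtain a given &#x003B1;.</p>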
<p>Naturally, outside the framework of CBMs (Equations 4&#x02013;6), long-term correlations may be modeled differently, since there are numerous distinct ways to generate power-law distributions (Newman, <xref ref-type="bibr" rid="B29">2005</xref>). For example, as numerically demonstrated in Gal and Marom (<xref ref-type="bibr" rid="B10a">2013</xref>), 1/f statistics in neuronal firing patterns can be generated by assuming global (cooperative) interactions between ion channels (i.e., not through the voltage). Biophysically, the significance of interactions between ion channels is still not clear (Naundorf et al., <xref ref-type="bibr" rid="B28">2006</xref> and Brief Communications arising), but other cellular processes that might affect excitability on slower timescales clearly exhibit interactions (e.g., gene regulation networks; Davidson and Levin, <xref ref-type="bibr" rid="B5">2005</xref>). Mathematically, such interactions render the slow dynamics (Equation 6) non-linear at constant voltage (Gillespie, <xref ref-type="bibr" rid="B13">2000</xref>). It would be interesting to generalize the theory presented here in order to understand how to tune the PSD in such a non-linear setting, since this has the potential to further reduce the number of parameters and the model complexity.</p>
</sec>
<sec>
<title>4.2. Biophysical implementation</title>
<p>We examine our theoretical predictions numerically, using a stochastic Hodgkin&#x02013;Huxley-type model with slow sodium inactivation that was previously fitted to the basic experimental results (Soudry and Meir, <xref ref-type="bibr" rid="B45">2012</xref>). We extend this model to include four additional slow processes, which resemble slow sodium inactivation (Appendix C.2). The only difference is that each process is slower than the previous one, and has a lower number of ion channels, according to the specific scaling relation derived above. The resulting model indeed generates <italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise, and is demonstrated numerically (Figure <xref ref-type="fig" rid="F3">3</xref>) to fit the experimental results of Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>). This is the first time, to our knowledge, that a cortical neuron model (either biophysical or phenomenological) reproduces experimental results over such long timescales. Notably, without the analytical results it would be hard to tune such a biophysical neuron model, due to its large number of unknown parameters.</p>
<p>Previous works (Lowen et al., <xref ref-type="bibr" rid="B21">1999</xref>; Soen and Braun, <xref ref-type="bibr" rid="B42">2000</xref>) demonstrated numerically that, even with constant current stimulation, incorporating slow processes into an excitable cell model can generate <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics in its response. In Lowen et al. (<xref ref-type="bibr" rid="B21">1999</xref>), an HH model extended to include multiple slow processes with scaled rates in the potassium channel produced an <italic>f</italic><sup>&#x02212;&#x003B1;</sup> firing rate response, with an exponent of &#x003B1; &#x02248; 0.5, replicating experimental measurements from auditory nerves. Another work (Soen and Braun, <xref ref-type="bibr" rid="B42">2000</xref>), aiming to reproduce the activity of heart cells, produced long-term correlations with &#x003B1; &#x02248; 1.6&#x02013;2 using a reflected diffusion process.</p>
<p>The identity of the specific slow processes involved in generating <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics remains a mystery at this point, since there are many possible mechanisms that can modulate the excitability of the cell on such long timescales. For example, ion channel numbers, conductances and kinetics are constantly being regulated and may change over time (e.g., Levitan, <xref ref-type="bibr" rid="B20">1994</xref>; Staub et al., <xref ref-type="bibr" rid="B46">1997</xref>). Also, the ionic concentrations in the cell depend on the activity of the ionic pumps, which can be affected by metabolism (Silver et al., <xref ref-type="bibr" rid="B40">1997</xref>). Finally, the spike initiation region can significantly shift its location with time (e.g., 17 &#x003BC;m distally during 48 h of high activity; Grubb and Burrone, <xref ref-type="bibr" rid="B15">2010</xref>), and so can cellular neurites (Paola et al., <xref ref-type="bibr" rid="B34">2006</xref>; Nishiyama et al., <xref ref-type="bibr" rid="B30">2007</xref>). Only after such slow processes are quantitatively characterized can we determine their effect on the neuron&#x00027;s excitability at long timescales.</p>
</sec>
<sec>
<title>4.3. The input&#x02013;output relation</title>
<p>The linearized input&#x02013;output relation of the fitted CBM was derived using the methods described in Soudry and Meir (<xref ref-type="bibr" rid="B43">2014</xref>). This linearized relation decomposes the contributions of external inputs and internal fluctuations to the response of the neuron. This decomposition (Equation 33) shows that even though the neuron can &#x0201C;remember&#x0201D; its intrinsic fluctuations over timescales of days, its memory of past pulse inputs can be limited to a shorter timescale of &#x0007E;10<sup>2</sup> s (Figure <xref ref-type="fig" rid="F4">4</xref>). Notably, the neuron has this limited memory for such inputs even though processes on much slower timescales exist in the model.</p>
<p>In the introduction we mentioned previous works (Lundstrom et al., <xref ref-type="bibr" rid="B24">2008</xref>, <xref ref-type="bibr" rid="B23">2010</xref>; Pozzorini et al., <xref ref-type="bibr" rid="B36">2013</xref>) which also described the temporal integration in the neuron using power-law filters, although in a rather different (non-sparse) stimulation regime. Our fitted model indicates that similar power-law integration still occurs at very long timescales. However, it is not the input that is being integrated, but the internal fluctuations in the neuron, and this is what drives the <italic>f</italic><sup>&#x02212;&#x003B1;</sup> statistics measured by Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>). Also, as in Lundstrom et al. (<xref ref-type="bibr" rid="B24">2008</xref>, <xref ref-type="bibr" rid="B23">2010</xref>) and Pozzorini et al. (<xref ref-type="bibr" rid="B36">2013</xref>), the neuronal response in our model is indeed affected by the last 10 s of its external inputs. However, our model suggests the response will not be significantly affected by spike perturbations in its input that occurred more than 10<sup>2</sup> s ago.</p>
<p>Qualitatively, this specific timescale of the input memory stems from the &#x0201C;fastest slow negative feedback process&#x0201D; in the model (here, slow sodium inactivation). This process responds to input perturbations that change the firing rate much more quickly than all the other slow processes do. Its response to a perturbation brings the firing rate back to its steady state before the slower processes even register that the firing rate has changed. Therefore, effectively, these slower processes do not store much information about input perturbations. We suggest experiments to test input memory directly, by using <italic>f</italic><sup>&#x02212;&#x003B1;</sup> stimulation (Figure <xref ref-type="fig" rid="F5">5A</xref>), &#x0201C;sum of sines&#x0201D; stimulation (Figure <xref ref-type="fig" rid="F5">5B</xref>) and a variation of the mean stimulation rate (Equation 44 and Supplementary Figure <xref ref-type="supplementary-material" rid="SM1">2</xref>).</p>
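<p>This qualitative argument can be illustrated with a minimal linearized toy model (our own sketch, with arbitrary parameters, not the fitted CBM): a firing-rate deviation is suppressed by a fast and a slow negative-feedback variable, and a brief input pulse is cancelled by the fast variable before the slow one can accumulate it, so the effective input memory is set by the fastest feedback timescale:</p>

```python
import numpy as np

# Toy linearized rate model (hypothetical parameters, not the fitted CBM):
# r is the firing-rate deviation; a1 and a2 are fast and slow adaptation
# variables providing negative feedback with timescales tau1 and tau2.
dt, T = 0.01, 60.0
tau1, tau2 = 1.0, 100.0
g1, g2 = 5.0, 5.0
n = int(T / dt)
t = np.arange(n) * dt
s = np.zeros(n)
s[int(10 / dt):int(11 / dt)] = 1.0        # brief input perturbation

a1 = a2 = 0.0
r_trace = np.zeros(n)
a2_trace = np.zeros(n)
for i in range(n):
    r = s[i] - g1 * a1 - g2 * a2          # instantaneous rate deviation
    a1 += dt * (r - a1) / tau1            # fast negative feedback
    a2 += dt * (r - a2) / tau2            # slow negative feedback
    r_trace[i] = r
    a2_trace[i] = a2

peak = np.abs(r_trace).max()
# roughly ten tau1 after the pulse ends the response has relaxed back,
# long before tau2 equilibrates
late = np.abs(r_trace[t > 20]).max()
```

<p>In this sketch <monospace>late</monospace> is a small fraction of <monospace>peak</monospace>, and <monospace>a2_trace</monospace> barely moves: the fast feedback restores the rate before the slow process can store much information about the perturbation.</p>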
</sec>
<sec>
<title>4.4. Significance</title>
<p>This work makes several practical contributions. First, our results impose specific constraints on the slow processes that modulate the excitability on very long timescales (e.g., a ratio between timescales and channel numbers). Such constraints facilitate the construction of neuronal models with &#x0201C;realistic&#x0201D; input&#x02013;output relations over extended timescales. Hopefully, these constraints may also help to identify the relevant slow biophysical processes. Second, our results suggest that for sparse spiking inputs, the memory of a cortical neuron stretches back to the last minute of its input, but not much more. This limit could be especially relevant when fitting statistical models to neuronal data, and for setting limitations on neuronal computations.</p>
<p>As for the functional significance, it is still not clear why the neuronal response fluctuates so wildly, especially at long timescales. We end this paper by offering some speculations on this issue. We see three possible scenarios. One possibility is that these fluctuations are beneficial. For example, such non-stationary fluctuations should increase network heterogeneity, which may be advantageous (Tessone et al., <xref ref-type="bibr" rid="B47">2006</xref>; Padmanabhan and Urban, <xref ref-type="bibr" rid="B33">2010</xref>). Another possibility is that these fluctuations do not affect the neural response when the neuron is embedded in a network, for example, due to network feedback canceling slow changes in excitability. Finally, it is possible that such slow fluctuations are deleterious, an unavoidable &#x0201C;noise&#x0201D; generated by the non-stationary environment of the cell. Interestingly, <italic>f</italic><sup>&#x02212;&#x003B1;</sup> noise imposes important constraints on electronic circuits, and was predicted to impose similar constraints on neural circuits (Sarpeshkar, <xref ref-type="bibr" rid="B39">1998</xref>).</p>
</sec>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<sec sec-type="supplementary material" id="s5">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/journal/10.3389/fncom.2014.00035/abstract">http://www.frontiersin.org/journal/10.3389/fncom.2014.00035/abstract</ext-link></p>
<supplementary-material xlink:href="Presentation1.PDF" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ack>
<p>The authors are grateful to O. Barak, N. Brenner, Y. Elhanati, A. Gal, T. Knafo, Y. Kafri, S. Marom, J. Schiller, and M. Ziv for insightful discussions and for reviewing parts of this manuscript. The authors are also grateful to A. Gal and S. Marom for supplying the experimental data. The research was partially funded by the Technion V.P.R. fund and by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bean</surname> <given-names>B. P.</given-names></name></person-group> (<year>2007</year>). <article-title>The action potential in mammalian central neurons</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>8</volume>, <fpage>451</fpage>&#x02013;<lpage>465</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2148</pub-id><pub-id pub-id-type="pmid">17514198</pub-id></citation> 
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>B&#x000E9;dard</surname> <given-names>C.</given-names></name> <name><surname>Kr&#x000F6;ger</surname> <given-names>H.</given-names></name> <name><surname>Destexhe</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <article-title>Does the 1/f frequency scaling of brain signals reflect self-organized critical states?</article-title> <source>Phys. Rev. Lett</source>. <volume>97</volume>:<fpage>118102</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.97.118102</pub-id><pub-id pub-id-type="pmid">17025932</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bendat</surname> <given-names>J. S.</given-names></name> <name><surname>Piersol</surname> <given-names>A. G.</given-names></name></person-group> (<year>2000</year>). <source>Random Data: Analysis and Measurement Procedures</source>, Vol. 11, <edition>3rd edn</edition>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Wiley</publisher-name>.</citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chandler</surname> <given-names>W. K.</given-names></name> <name><surname>Meves</surname> <given-names>H.</given-names></name></person-group> (<year>1970</year>). <article-title>Slow changes in membrane permeability and long-lasting action potentials in axons perfused with fluoride solutions</article-title>. <source>J. Physiol</source>. <volume>211</volume>, <fpage>707</fpage>&#x02013;<lpage>728</lpage>. <pub-id pub-id-type="pmid">5501058</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davidson</surname> <given-names>E.</given-names></name> <name><surname>Levin</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Gene regulatory networks</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>102</volume>:<fpage>4935</fpage>. <pub-id pub-id-type="doi">10.1073/pnas.0502024102</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Col</surname> <given-names>R.</given-names></name> <name><surname>Messlinger</surname> <given-names>K.</given-names></name> <name><surname>Carr</surname> <given-names>R. W.</given-names></name></person-group> (<year>2008</year>). <article-title>Conduction velocity is regulated by sodium channel inactivation in unmyelinated axons innervating the rat cranial meninges</article-title>. <source>J. Physiol</source>. <volume>586</volume>, <fpage>1089</fpage>&#x02013;<lpage>1103</lpage>. <pub-id pub-id-type="doi">10.1113/jphysiol.2007.145383</pub-id><pub-id pub-id-type="pmid">18096592</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Debanne</surname> <given-names>D.</given-names></name> <name><surname>Campanac</surname> <given-names>E.</given-names></name> <name><surname>Bialowas</surname> <given-names>A.</given-names></name> <name><surname>Carlier</surname> <given-names>E.</given-names></name></person-group> (<year>2011</year>). <article-title>Axon physiology</article-title>. <source>Physiol. Rev</source>. <volume>91</volume>, <fpage>555</fpage>&#x02013;<lpage>602</lpage>. <pub-id pub-id-type="doi">10.1152/physrev.00048.2009</pub-id><pub-id pub-id-type="pmid">21527732</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fleidervish</surname> <given-names>I. A.</given-names></name> <name><surname>Friedman</surname> <given-names>A.</given-names></name> <name><surname>Gutnick</surname> <given-names>M. J.</given-names></name></person-group> (<year>1996</year>). <article-title>Slow inactivation of Na&#x0002B; current and slow cumulative spike adaptation in mouse and guinea-pig neocortical neurones in slices</article-title>. <source>J. Physiol</source>. <volume>493</volume>, <fpage>83</fpage>&#x02013;<lpage>97</lpage>. <pub-id pub-id-type="pmid">8735696</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gal</surname> <given-names>A.</given-names></name> <name><surname>Eytan</surname> <given-names>D.</given-names></name> <name><surname>Wallach</surname> <given-names>A.</given-names></name> <name><surname>Sandler</surname> <given-names>M.</given-names></name> <name><surname>Schiller</surname> <given-names>J.</given-names></name> <name><surname>Marom</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>Dynamics of excitability over extended timescales in cultured cortical neurons</article-title>. <source>J. Neurosci</source>. <volume>30</volume>, <fpage>16332</fpage>&#x02013;<lpage>16342</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4859-10.2010</pub-id><pub-id pub-id-type="pmid">21123579</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gal</surname> <given-names>A.</given-names></name> <name><surname>Marom</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>Entrainment of the intrinsic dynamics of single isolated neurons by natural-like input</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>7912</fpage>&#x02013;<lpage>7918</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3763-12.2013</pub-id><pub-id pub-id-type="pmid">23637182</pub-id></citation>
</ref>
<ref id="B10a">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gal</surname> <given-names>A.</given-names></name> <name><surname>Marom</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>Self-organized criticality in single neuron excitability</article-title>. <source>Phys. Rev. E</source> <volume>88</volume>:<fpage>062717</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.88.062717</pub-id><pub-id pub-id-type="pmid">24483496</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gilden</surname> <given-names>D.</given-names></name> <name><surname>Thornton</surname> <given-names>T.</given-names></name> <name><surname>Mallon</surname> <given-names>M.</given-names></name></person-group> (<year>1995</year>). <article-title>1/f noise in human cognition</article-title>. <source>Science</source> <volume>267</volume>, <fpage>1837</fpage>&#x02013;<lpage>1839</lpage>. <pub-id pub-id-type="doi">10.1126/science.7892611</pub-id><pub-id pub-id-type="pmid">7892611</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gillespie</surname> <given-names>D. T.</given-names></name></person-group> (<year>2000</year>). <article-title>The chemical Langevin equation</article-title>. <source>J. Chem. Phys</source>. <volume>113</volume>, <fpage>297</fpage>. <pub-id pub-id-type="doi">10.1063/1.481811</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grossman</surname> <given-names>B. Y. Y.</given-names></name> <name><surname>Parnas</surname> <given-names>I.</given-names></name> <name><surname>Spira</surname> <given-names>M. E.</given-names></name></person-group> (<year>1979</year>). <article-title>Differential conduction block in branches of a bifurcating axon</article-title>. <source>J. Physiol</source>. <volume>295</volume>, <fpage>307</fpage>&#x02013;<lpage>322</lpage>. <pub-id pub-id-type="pmid">521937</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grubb</surname> <given-names>M. S.</given-names></name> <name><surname>Burrone</surname> <given-names>J.</given-names></name></person-group> (<year>2010</year>). <article-title>Activity-dependent relocation of the axon initial segment fine-tunes neuronal excitability</article-title>. <source>Nature</source> <volume>465</volume>, <fpage>1070</fpage>&#x02013;<lpage>1074</lpage>. <pub-id pub-id-type="doi">10.1038/nature09160</pub-id><pub-id pub-id-type="pmid">20543823</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hille</surname> <given-names>B.</given-names></name></person-group> (<year>2001</year>). <source>Ion Channels of Excitable Membranes, 3rd edn</source>. <publisher-loc>Sunderland, MA</publisher-loc>: <publisher-name>Sinauer Associates</publisher-name>.</citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hodgkin</surname> <given-names>A. L.</given-names></name> <name><surname>Huxley</surname> <given-names>A. F.</given-names></name></person-group> (<year>1952</year>). <article-title>A quantitative description of membrane current and its application to conduction and excitation in nerve</article-title>. <source>J. Physiol</source>. <volume>117</volume>, <fpage>500</fpage>&#x02013;<lpage>544</lpage>. <pub-id pub-id-type="pmid">12991237</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Keshner</surname> <given-names>M. S.</given-names></name></person-group> (<year>1979</year>). <source>Renewal Process and Diffusion Models of 1/f Noise</source>. Ph.D. thesis, Massachusetts Institute of Technology.</citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keshner</surname> <given-names>M. S.</given-names></name></person-group> (<year>1982</year>). <article-title>1/f noise</article-title>. <source>Proc. IEEE</source> <volume>70</volume>, <fpage>212</fpage>&#x02013;<lpage>218</lpage>. <pub-id pub-id-type="doi">10.1109/PROC.1982.12282</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levitan</surname> <given-names>I. B.</given-names></name></person-group> (<year>1994</year>). <article-title>Modulation of ion channels by protein phosphorylation and dephosphorylation</article-title>. <source>Annu. Rev. Physiol</source>. <volume>56</volume>, <fpage>193</fpage>&#x02013;<lpage>212</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.ph.56.030194.001205</pub-id><pub-id pub-id-type="pmid">7516643</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lowen</surname> <given-names>S. B.</given-names></name> <name><surname>Liebovitch</surname> <given-names>L. S.</given-names></name> <name><surname>White</surname> <given-names>J. A.</given-names></name></person-group> (<year>1999</year>). <article-title>Fractal ion-channel behavior generates fractal firing patterns in neuronal models</article-title>. <source>Phys. Rev. E</source> <volume>59</volume>, <fpage>5970</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.59.5970</pub-id><pub-id pub-id-type="pmid">11969579</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lowen</surname> <given-names>S. B.</given-names></name> <name><surname>Teich</surname> <given-names>M. C.</given-names></name></person-group> (<year>2005</year>). <source>Fractal-Based Point Processes, 1st edn</source>. <publisher-loc>Hoboken, NJ</publisher-loc>: <publisher-name>Wiley-Interscience</publisher-name>. <pub-id pub-id-type="doi">10.1002/0471754722</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lundstrom</surname> <given-names>B. N.</given-names></name> <name><surname>Fairhall</surname> <given-names>A. L.</given-names></name> <name><surname>Maravall</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Multiple timescale encoding of slowly varying whisker stimulus envelope in cortical and thalamic neurons <italic>in vivo</italic></article-title>. <source>J. Neurosci</source>. <volume>30</volume>, <fpage>5071</fpage>&#x02013;<lpage>5077</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2193-09.2010</pub-id><pub-id pub-id-type="pmid">20371827</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lundstrom</surname> <given-names>B. N.</given-names></name> <name><surname>Higgs</surname> <given-names>M. H.</given-names></name> <name><surname>Spain</surname> <given-names>W. J.</given-names></name> <name><surname>Fairhall</surname> <given-names>A. L.</given-names></name></person-group> (<year>2008</year>). <article-title>Fractional differentiation by neocortical pyramidal neurons</article-title>. <source>Nat. Neurosci</source>. <volume>11</volume>, <fpage>1335</fpage>&#x02013;<lpage>1342</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2212</pub-id><pub-id pub-id-type="pmid">18931665</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marder</surname> <given-names>E.</given-names></name> <name><surname>Goaillard</surname> <given-names>J.-M.</given-names></name></person-group> (<year>2006</year>). <article-title>Variability, compensation and homeostasis in neuron and network function</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>7</volume>, <fpage>563</fpage>&#x02013;<lpage>574</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1949</pub-id><pub-id pub-id-type="pmid">16791145</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marom</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>Neural timescales or lack thereof</article-title>. <source>Prog. Neurobiol</source>. <volume>90</volume>, <fpage>16</fpage>&#x02013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.1016/j.pneurobio.2009.10.003</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Musha</surname> <given-names>T.</given-names></name> <name><surname>Yamamoto</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>1/f fluctuations in biological systems</article-title>, in <source>Proceedings of the 19th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. &#x02018;Magnificent Milestones and Emerging Opportunities in Medical Engineering&#x02019; (Cat. No.97CH36136)</source>, Vol. 6 (<publisher-loc>Chicago, IL</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2692</fpage>&#x02013;<lpage>2697</lpage>. <pub-id pub-id-type="doi">10.1109/IEMBS.1997.756890</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Naundorf</surname> <given-names>B.</given-names></name> <name><surname>Wolf</surname> <given-names>F.</given-names></name> <name><surname>Volgushev</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Unique features of action potential initiation in cortical neurons</article-title>. <source>Nature</source> <volume>440</volume>, <fpage>1060</fpage>&#x02013;<lpage>1063</lpage>. <pub-id pub-id-type="doi">10.1038/nature04610</pub-id><pub-id pub-id-type="pmid">16625198</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Newman</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Power laws, Pareto distributions and Zipf&#x00027;s law</article-title>. <source>Contemp. Phys</source>. <volume>46</volume>, <fpage>323</fpage>&#x02013;<lpage>351</lpage>. <pub-id pub-id-type="doi">10.1080/00107510500052444</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nishiyama</surname> <given-names>H.</given-names></name> <name><surname>Fukaya</surname> <given-names>M.</given-names></name> <name><surname>Watanabe</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Axonal motility and its modulation by activity are branch-type specific in the intact adult cerebellum</article-title>. <source>Neuron</source> <volume>56</volume>, <fpage>472</fpage>&#x02013;<lpage>487</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2007.09.010</pub-id><pub-id pub-id-type="pmid">17988631</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Oppenheim</surname> <given-names>A.</given-names></name> <name><surname>Willsky</surname> <given-names>A.</given-names></name> <name><surname>Nawab</surname> <given-names>S.</given-names></name></person-group> (<year>1983</year>). <source>Signals and Systems</source>. <publisher-loc>Englewood Cliffs, NJ</publisher-loc>: <publisher-name>Prentice Hall</publisher-name>.</citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Orio</surname> <given-names>P.</given-names></name> <name><surname>Soudry</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Simple, fast and accurate implementation of the diffusion approximation algorithm for stochastic ion channels with multiple states</article-title>. <source>PLoS ONE</source> <volume>7</volume>:<fpage>e36670</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0036670</pub-id><pub-id pub-id-type="pmid">22629320</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Padmanabhan</surname> <given-names>K.</given-names></name> <name><surname>Urban</surname> <given-names>N. N.</given-names></name></person-group> (<year>2010</year>). <article-title>Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content</article-title>. <source>Nat. Neurosci</source>. <volume>13</volume>, <fpage>1276</fpage>&#x02013;<lpage>1282</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2630</pub-id><pub-id pub-id-type="pmid">20802489</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Paola</surname> <given-names>V.</given-names></name> <name><surname>Holtmaat</surname> <given-names>A.</given-names></name> <name><surname>Knott</surname> <given-names>G.</given-names></name> <name><surname>Song</surname> <given-names>S.</given-names></name> <name><surname>Wilbrecht</surname> <given-names>L.</given-names></name> <name><surname>Caroni</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2006</year>). <article-title>Cell type-specific structural plasticity of axonal branches and boutons in the adult neocortex</article-title>. <source>Neuron</source> <volume>49</volume>, <fpage>861</fpage>&#x02013;<lpage>875</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2006.02.017</pub-id><pub-id pub-id-type="pmid">16543134</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Papoulis</surname> <given-names>A.</given-names></name> <name><surname>Pillai</surname> <given-names>S. U.</given-names></name></person-group> (<year>1965</year>). <source>Probability, Random Variables, and Stochastic Processes</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>McGraw-Hill</publisher-name>.</citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pozzorini</surname> <given-names>C.</given-names></name> <name><surname>Naud</surname> <given-names>R.</given-names></name> <name><surname>Mensi</surname> <given-names>S.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2013</year>). <article-title>Temporal whitening by power-law adaptation in neocortical neurons</article-title>. <source>Nat. Neurosci</source>. <volume>16</volume>, <fpage>942</fpage>&#x02013;<lpage>948</lpage>. <pub-id pub-id-type="doi">10.1038/nn.3431</pub-id><pub-id pub-id-type="pmid">23749146</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Repp</surname> <given-names>B. H.</given-names></name></person-group> (<year>2005</year>). <article-title>Sensorimotor synchronization: a review of the tapping literature</article-title>. <source>Psychon. Bull. Rev</source>. <volume>12</volume>, <fpage>969</fpage>&#x02013;<lpage>992</lpage>. <pub-id pub-id-type="doi">10.3758/BF03206433</pub-id><pub-id pub-id-type="pmid">16615317</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Robinson</surname> <given-names>P. M.</given-names></name></person-group> (<year>2003</year>). <source>Time Series with Long Memory</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarpeshkar</surname> <given-names>R.</given-names></name></person-group> (<year>1998</year>). <article-title>Analog versus digital: extrapolating from electronics to neurobiology</article-title>. <source>Neural Comput</source>. <volume>10</volume>, <fpage>1601</fpage>&#x02013;<lpage>1638</lpage>. <pub-id pub-id-type="doi">10.1162/089976698300017052</pub-id><pub-id pub-id-type="pmid">9744889</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Silver</surname> <given-names>I. A.</given-names></name> <name><surname>Deas</surname> <given-names>J.</given-names></name> <name><surname>Erecinska</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>Ion homeostasis in brain cells: differences in intracellular ion responses to energy limitation between cultured neurons and glial cells</article-title>. <source>Neuroscience</source> <volume>78</volume>, <fpage>589</fpage>&#x02013;<lpage>601</lpage>. <pub-id pub-id-type="doi">10.1016/S0306-4522(96)00600-8</pub-id><pub-id pub-id-type="pmid">9145812</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sj&#x000F6;str&#x000F6;m</surname> <given-names>P. J.</given-names></name> <name><surname>Rancz</surname> <given-names>E. A.</given-names></name> <name><surname>Roth</surname> <given-names>A.</given-names></name> <name><surname>H&#x000E4;usser</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>Dendritic excitability and synaptic plasticity</article-title> <source>Physiol. Rev</source>. <volume>88</volume>, <fpage>769</fpage>&#x02013;<lpage>840</lpage>. <pub-id pub-id-type="doi">10.1152/physrev.00016.2007</pub-id><pub-id pub-id-type="pmid">18391179</pub-id></citation> 
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soen</surname> <given-names>Y.</given-names></name> <name><surname>Braun</surname> <given-names>E.</given-names></name></person-group> (<year>2000</year>). <article-title>Scale-invariant fluctuations at different levels of organization in developing heart cell networks</article-title>. <source>Phys. Rev. E</source> <volume>61</volume>, <fpage>R2216</fpage>&#x02013;<lpage>R2219</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.61.R2216</pub-id><pub-id pub-id-type="pmid">11046684</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soudry</surname> <given-names>D.</given-names></name> <name><surname>Meir</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>The neuronal response at extended timescales: a linearized spiking input-output relation</article-title>. <source>Front. Comput. Neurosci</source>. <volume>8</volume>:<issue>35</issue>. <pub-id pub-id-type="doi">10.3389/fncom.2014.00035</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soudry</surname> <given-names>D.</given-names></name> <name><surname>Meir</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>History-dependent dynamics in a generic model of ion channels - an analytic study</article-title>. <source>Front. Comput. Neurosci</source>. <volume>4</volume>:<fpage>1</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.3389/fncom.2010.00003</pub-id><pub-id pub-id-type="pmid">20725633</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soudry</surname> <given-names>D.</given-names></name> <name><surname>Meir</surname> <given-names>R.</given-names></name></person-group> (<year>2012</year>). <article-title>Conductance-based neuron models and the slow dynamics of excitability</article-title>. <source>Front. Comput. Neurosci</source>. <volume>6</volume>:<issue>4</issue>. <pub-id pub-id-type="doi">10.3389/fncom.2012.00004</pub-id><pub-id pub-id-type="pmid">22355288</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Staub</surname> <given-names>O.</given-names></name> <name><surname>Gautschi</surname> <given-names>I.</given-names></name> <name><surname>Ishikawa</surname> <given-names>T.</given-names></name> <name><surname>Breitschopf</surname> <given-names>K.</given-names></name> <name><surname>Ciechanover</surname> <given-names>A.</given-names></name> <name><surname>Schild</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>1997</year>). <article-title>Regulation of stability and function of the epithelial Na&#x0002B; channel (ENaC) by ubiquitination</article-title>. <source>EMBO J</source>. <volume>16</volume>, <fpage>6325</fpage>&#x02013;<lpage>6336</lpage>. <pub-id pub-id-type="doi">10.1093/emboj/16.21.6325</pub-id><pub-id pub-id-type="pmid">9351815</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tessone</surname> <given-names>C.</given-names></name> <name><surname>Mirasso</surname> <given-names>C.</given-names></name> <name><surname>Toral</surname> <given-names>R.</given-names></name> <name><surname>Gunton</surname> <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>Diversity-induced resonance</article-title>. <source>Phys. Rev. Lett</source>. <volume>97</volume>, <fpage>1</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.97.194101</pub-id><pub-id pub-id-type="pmid">17155633</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ward</surname> <given-names>L.</given-names></name> <name><surname>Greenwood</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>1/f noise</article-title>. <source>Scholarpedia</source> <volume>2</volume>:<fpage>1537</fpage>. <pub-id pub-id-type="doi">10.4249/scholarpedia.1537</pub-id></citation>
</ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>I.e., if &#x02200;<italic>t</italic> : <italic>I</italic>(<italic>t</italic>) &#x0003D; 0, then the probability that the neuron fires is negligible on any relevant finite time interval (e.g., minutes or days).</p></fn>
<fn id="fn0002"><p><sup>2</sup>A semi-analytic derivation is an analytic derivation in which some terms are obtained by relatively simple numerics.</p></fn>
<fn id="fn0003"><p><sup>3</sup>For example, this can happen if the kinetic rates all have a low voltage threshold, resulting in <bold>A</bold><sub>&#x0002B;</sub> &#x02248; <bold>A</bold><sub>&#x02212;</sub> and <bold>b</bold><sub>&#x0002B;</sub> &#x02248; <bold>b</bold><sub>&#x02212;</sub>.</p></fn>
<fn id="fn0004"><p><sup>4</sup>I.e., in all neurons for which 0 &#x0003C; <italic>p</italic><sub>&#x0002A;</sub> &#x0003C; 1.</p></fn>
<fn id="fn0005"><p><sup>5</sup>Otherwise, 0.25 &#x02265; <italic>p</italic><sub>&#x0002A;</sub> &#x02212; <italic>p</italic><sub>&#x0002A;</sub><sup>2</sup> &#x0003D; &#x02329; <italic>&#x00176;</italic><sub><italic>m</italic></sub><sup>2</sup> &#x0232A; &#x02265; 2 <inline-formula><mml:math id="M60"><mml:mrow><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mrow><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>max</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msubsup><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>Y</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:mstyle></mml:mrow></mml:math></inline-formula> (<italic>f</italic>)<italic>df</italic> &#x0003D; &#x0221E;, which is a contradiction.</p></fn>
<fn id="fn0006"><p><sup>6</sup>Also, such simulations take a long time, since the experiments being reproduced, as in Gal et al. (<xref ref-type="bibr" rid="B9">2010</xref>), last for days.</p></fn>
<fn id="fn0007"><p><sup>7</sup>E.g., in Equation (16), many different parameter sets would give the same <italic>c</italic><sub><italic>k</italic></sub>.</p></fn>
<fn id="fn0008"><p><sup>8</sup>Near the edges, <bold>w</bold> &#x02192; 0 (Equation 80 in Soudry and Meir, <xref ref-type="bibr" rid="B43">2014</xref>), and so &#x003BA; (<italic>f</italic>) &#x02192; 1.</p></fn>
<fn id="fn0009"><p><sup>9</sup>I.e., Equations (4&#x02013;6), with the same assumptions as we had in section 2.3.1.</p></fn>
</fn-group>
</back>
</article>
