<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Synaptic Neurosci.</journal-id>
<journal-title>Frontiers in Synaptic Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Synaptic Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1663-3563</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnsyn.2014.00026</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The interplay of plasticity and adaptation in neural circuits: a generative model</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Bernacchia</surname> <given-names>Alberto</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/131693"/>
</contrib>
</contrib-group>
<aff><institution>School of Engineering and Science, Jacobs University Bremen</institution> <country>Bremen, Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Christian Tetzlaff, Georg-August University, Germany</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Martin Heine, Leibniz Institute for Neurobiology, Germany; David Barrett, Cambridge University, UK</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Alberto Bernacchia, School of Engineering and Science, Jacobs University Bremen, Campus Ring 1, Bremen 28759, Germany e-mail: <email>a.bernacchia&#x00040;jacobs-university.de</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to the journal Frontiers in Synaptic Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>30</day>
<month>10</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>6</volume>
<elocation-id>26</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>05</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>10</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Bernacchia.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Multiple neural and synaptic phenomena take place in the brain. They operate over a broad range of timescales, and the consequences of their interplay are still unclear. In this work, I study a computational model of a recurrent neural network in which two dynamic processes take place: sensory adaptation and synaptic plasticity. Both phenomena are ubiquitous in the brain, but their dynamic interplay has not been investigated. I show that when both processes are included, the neural circuit is able to perform a specific computation: it becomes a generative model for certain distributions of input stimuli. The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli. In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution. This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.</p></abstract>
<kwd-group>
<kwd>synaptic plasticity</kwd>
<kwd>sensory adaptation</kwd>
<kwd>dynamical systems</kwd>
<kwd>attractor model</kwd>
<kwd>generative model</kwd>
<kwd>Bayesian inference</kwd>
</kwd-group>
<counts>
<fig-count count="5"/>
<table-count count="0"/>
<equation-count count="41"/>
<ref-count count="89"/>
<page-count count="14"/>
<word-count count="9421"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>1. Introduction</title>
<p>The main goal of Computational Neuroscience is to uncover the kinds of computation implemented by neurons and neural circuits, and to identify the biological mechanisms underlying these computations. Numerous types of computation have been described and have been associated with the dynamics of different neural and synaptic processes (Herz et al., <xref ref-type="bibr" rid="B36">2006</xref>; Abbott, <xref ref-type="bibr" rid="B1">2008</xref>; Gerstner et al., <xref ref-type="bibr" rid="B33">2012</xref>; Tetzlaff et al., <xref ref-type="bibr" rid="B79">2012</xref>). Among the numerous biological phenomena observed in the brain, sensory adaptation and synaptic plasticity stand out as two of the most studied, since they are observed ubiquitously across most brain regions and animal species. Both phenomena give rise to specific types of computation, but the functional implications of their interaction remain unclear.</p>
<p>Synaptic plasticity is the change in strength of the interaction between neurons, and is believed to control the change in behavior of a subject following its experience of the external world. Synaptic plasticity takes multiple forms (Abbott and Nelson, <xref ref-type="bibr" rid="B2">2000</xref>; Feldman, <xref ref-type="bibr" rid="B27">2009</xref>), of which the most studied is Hebbian plasticity (Bi and Poo, <xref ref-type="bibr" rid="B10">2001</xref>; Caporale and Dan, <xref ref-type="bibr" rid="B17">2008</xref>). Different types of plasticity are believed to underlie a broad range of functions, including: memory formation and storage (Martin et al., <xref ref-type="bibr" rid="B54">2000</xref>; Lamprecht and LeDoux, <xref ref-type="bibr" rid="B50">2004</xref>; Seung, <xref ref-type="bibr" rid="B72">2009</xref>), nervous system development (Katz and Shatz, <xref ref-type="bibr" rid="B42">1996</xref>; Miller, <xref ref-type="bibr" rid="B56">1996</xref>; Sanes and Lichtman, <xref ref-type="bibr" rid="B67">1999</xref>; Song and Abbott, <xref ref-type="bibr" rid="B76">2001</xref>), recovery after brain injury (Buonomano and Merzenich, <xref ref-type="bibr" rid="B15">1998</xref>; Feldman and Brecht, <xref ref-type="bibr" rid="B28">2005</xref>), classical conditioning (Wickens et al., <xref ref-type="bibr" rid="B87">2003</xref>; Calabresi et al., <xref ref-type="bibr" rid="B16">2007</xref>; Surmeier et al., <xref ref-type="bibr" rid="B78">2009</xref>; Pawlak et al., <xref ref-type="bibr" rid="B61">2010</xref>; Gallistel and Matzel, <xref ref-type="bibr" rid="B31">2013</xref>), operant conditioning (Seung, <xref ref-type="bibr" rid="B71">2003</xref>; Montague et al., <xref ref-type="bibr" rid="B57">2004</xref>; Daw and Doya, <xref ref-type="bibr" rid="B21">2006</xref>; Doya, <xref ref-type="bibr" rid="B23">2007</xref>; Soltani and Wang, <xref ref-type="bibr" rid="B75">2008</xref>), spatial navigation (Blum and Abbott, <xref ref-type="bibr" rid="B11">1996</xref>; 
Mehta et al., <xref ref-type="bibr" rid="B55">2002</xref>), efficient coding of sensory stimuli (Toyoizumi et al., <xref ref-type="bibr" rid="B80">2005</xref>; Savin et al., <xref ref-type="bibr" rid="B68">2010</xref>; Bernacchia and Wang, <xref ref-type="bibr" rid="B8">2013</xref>), homeostatic regulation of neuronal excitability (Royer and Par&#x000E9;, <xref ref-type="bibr" rid="B65">2003</xref>; Turrigiano and Nelson, <xref ref-type="bibr" rid="B82">2004</xref>; Williams et al., <xref ref-type="bibr" rid="B88">2013</xref>), sound localization (Gerstner et al., <xref ref-type="bibr" rid="B32">1996</xref>) and production of behavioral sequences (Fiete et al., <xref ref-type="bibr" rid="B29">2010</xref>).</p>
<p>Sensory adaptation is the change in responsiveness of a neuron to a given input, and is believed to control the change in perception of a stimulus, even if the stimulus maintains constant physical attributes (Webster, <xref ref-type="bibr" rid="B86">2011</xref>). The main effect of adaptation on neural activity is to shift the response function depending on the adapting stimulus (Kohn, <xref ref-type="bibr" rid="B45">2007</xref>; Rieke and Rudd, <xref ref-type="bibr" rid="B64">2009</xref>). This effect has been observed in a broad range of species, sensory modalities and stimulus variables, including: luminance (Sakmann and Creutzfeldt, <xref ref-type="bibr" rid="B66">1969</xref>; Shapley and Enroth-Cugell, <xref ref-type="bibr" rid="B73">1984</xref>), contrast (Ohzawa et al., <xref ref-type="bibr" rid="B60">1985</xref>; Smirnakis et al., <xref ref-type="bibr" rid="B74">1997</xref>), edge orientation (M&#x000FC;ller et al., <xref ref-type="bibr" rid="B58">1999</xref>; Dragoi et al., <xref ref-type="bibr" rid="B24">2000</xref>), direction of motion (Kohn and Movshon, <xref ref-type="bibr" rid="B46">2004</xref>), motion speed (Brenner et al., <xref ref-type="bibr" rid="B14">2000</xref>; Krekelberg et al., <xref ref-type="bibr" rid="B49">2006</xref>) and sound level (Dean et al., <xref ref-type="bibr" rid="B22">2005</xref>). In addition to the shift in tuning, the gain of neural response changes depending on the stimulus variance (Fairhall et al., <xref ref-type="bibr" rid="B26">2001</xref>; Borst et al., <xref ref-type="bibr" rid="B13">2005</xref>; Nagel and Doupe, <xref ref-type="bibr" rid="B59">2006</xref>; Maravall et al., <xref ref-type="bibr" rid="B53">2007</xref>). 
One hypothesized function of sensory adaptation is the efficiency of coding: the statistics of input stimuli can vary widely and must be encoded by neurons with limited dynamic range; centering the neural response around the mean input prevents saturation and determines optimal discrimination (Laughlin, <xref ref-type="bibr" rid="B51">1981</xref>; Wainwright, <xref ref-type="bibr" rid="B84">1999</xref>; Machens et al., <xref ref-type="bibr" rid="B52">2005</xref>; Clifford et al., <xref ref-type="bibr" rid="B20">2007</xref>; Schwartz et al., <xref ref-type="bibr" rid="B69">2007</xref>; Wark et al., <xref ref-type="bibr" rid="B85">2007</xref>).</p>
<p>I simulate and analyze a computational model of a recurrent neural circuit, and I show that when both sensory adaptation and synaptic plasticity are included in the model, the neural circuit is endowed with a specific type of computation: it becomes a generative model of the input stimuli. Generative models provide a solution to a broad range of problems in machine learning (Hinton, <xref ref-type="bibr" rid="B37">2007</xref>, <xref ref-type="bibr" rid="B38">2010</xref>; Barra et al., <xref ref-type="bibr" rid="B5">2012</xref>), and have been proposed as candidate models of perception, learning and Bayesian inference in real brains (Fiser et al., <xref ref-type="bibr" rid="B30">2010</xref>; Clark, <xref ref-type="bibr" rid="B19">2013</xref>). In the model presented here, the spontaneous dynamics of neural activity lingers on a subset of specific neural patterns which correspond to the neural patterns that have been driven by sensory stimuli. In particular, the likelihood of observing a given neural pattern is equal to the frequency with which the corresponding stimulus has been previously experienced. Formally speaking, the model dynamics displays a large number of attractors which sample exactly the probability distribution of input stimuli. In the limit of an infinite number of neurons, I show that the dynamics converges to a continuous (line) attractor.</p>
<p>Neurons and synapses are modeled as binary variables (Hopfield, <xref ref-type="bibr" rid="B39">1982</xref>; Tsodyks, <xref ref-type="bibr" rid="B81">1990</xref>), therefore the model is not biologically realistic. In particular, it does not include separate populations of excitatory and inhibitory neurons and does not account for a range of dynamical regimes observed in the brain, such as the asynchronous and irregular spiking activity of cortical neurons. Also, the network operates in two distinct phases: (1) a stimulus-driven regime in which plasticity and adaptation occur and internal dynamics is turned off, and (2) a spontaneous regime in which the stable states of the dynamics are probed in the absence of stimuli, plasticity and adaptation. However, the model is very simple to simulate and analyze despite the inclusion of multiple mechanisms. The present work is limited to univariate distributions of input stimuli.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>2. Materials and methods</title>
<p>The neural circuit implemented in this work is a variant of a model studied in Bernacchia and Amit (<xref ref-type="bibr" rid="B7">2007</xref>), with the additional inclusion of adaptation. I consider a neural circuit with a total number of neurons equal to <italic>N</italic>, labeled by the index <italic>i</italic> &#x0003D; 1, &#x02026;, <italic>N</italic>. The total current afferent to neuron <italic>i</italic> at time <italic>t</italic> is the sum of the external current, due to the input stimulus, and the internal current, due to the local recurrent connections within the neural circuit:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msubsup><mml:mi>I</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:msubsup><mml:mi>I</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The activity of neuron <italic>i</italic> upon receiving current <italic>I<sub>i</sub></italic> is equal to</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext>sign</mml:mtext><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>where &#x00394;<italic>t</italic> is the time step used in simulations.</p>
<p>For simplicity, I consider two exclusive scenarios, in which either of the two types of currents dominates: When a stimulus is presented, the internal current is set to zero, therefore the external current dominates (&#x0201C;stimulus-driven&#x0201D; stage); when a stimulus is absent, the external current is set to zero and the internal current dominates (&#x0201C;spontaneous&#x0201D; stage). The case in which both types of current contribute simultaneously was studied in Bernacchia and Amit (<xref ref-type="bibr" rid="B7">2007</xref>) (in the absence of adaptation). The parameter &#x00394;<italic>t</italic> reflects the time constant of the update of a neuron&#x00027;s activity, and is set to 10 ms during the spontaneous stage. During the stimulus-driven stage, a sequence of stimuli is presented, and the activity is instantaneously enforced by each stimulus. Therefore, the activity is constant as long as the stimulus is constant, and changes immediately following transitions between subsequent stimuli. For simplicity, I simulate one time step for each stimulus, by setting the time step equal to one &#x0201C;trial,&#x0201D; &#x00394;<italic>t</italic> &#x0003D; 1. This value of &#x00394;<italic>t</italic> is used only for convenience of numerical integration, and is not related to any biological timescale. A total of <italic>T</italic> trials (a sequence of <italic>T</italic> stimuli) is simulated in one simulation, <italic>t</italic> &#x0003D; 1, &#x02026;, <italic>T</italic>.</p>
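The binarization of Equation (2) translates into a few lines of NumPy. This is a minimal sketch, not code from the paper: the function name and the convention of mapping zero current to +1 (a non-generic case) are my own assumptions.

```python
import numpy as np

def neuron_activity(current):
    """Binary neural activity of Equation (2): x_i = sign(I_i).

    During the stimulus-driven stage `current` is the external current
    only; during the spontaneous stage it is the internal current only.
    """
    x = np.sign(current)
    x[x == 0] = 1  # assumed convention: zero current maps to +1 (non-generic case)
    return x
```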
<p>The stimulus identity is labeled by &#x003B1;, varying in the interval of real numbers &#x003B1; &#x02208; (0,1), and the stimulus presented at time <italic>t</italic> is denoted as &#x003B1;(<italic>t</italic>). The stimulus value &#x003B1;(<italic>t</italic>) at each time step is drawn at random from a probability distribution <italic>P</italic>(&#x003B1;). The external current afferent to neuron <italic>i</italic> depends on how that neuron is tuned to the stimulus, which is summarized by its &#x0201C;tuning curve.&#x0201D; I consider two types of tuning curves in different simulations, one monotonic (sigmoidal) and one periodic (sine), given by the following simple formulas</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mrow><mml:msubsup><mml:mi>I</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>tanh</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>]</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mrow><mml:msubsup><mml:mi>I</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>]</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>An illustration of the tuning curves is presented in Figure <xref ref-type="fig" rid="F1">1</xref>. I define &#x003BC;<sub><italic>i</italic></sub> as the &#x0201C;tuning offset&#x0201D;: different neurons have different offsets, but the same shape of the tuning curve. The results of most simulations are shown for the sigmoidal tuning curve (3), but very similar results have been obtained for the periodic tuning curve (4) (see Appendix). The parameter &#x003B2; is positive, and its specific value is irrelevant, since the neuron output is binary, given by Equation (2), and the internal current is zero during the stimulus. In a given simulation, the probability distribution <italic>P</italic>(&#x003B1;) is taken from a parametric family (see e.g., <bold>Figure 3</bold>), and different parameters are drawn at random in different simulations (the distribution equals the square of a Fourier series with random coefficients truncated at five terms).</p>
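Equations (3) and (4) can be sketched directly in code. The snippet below is illustrative only: the network size, the slope value, and all variable names are my own choices (the text notes that the exact value of &#x003B2; is irrelevant once the output is binarized).

```python
import numpy as np

N = 8                                    # small network, for illustration only
mu = (np.arange(1, N + 1) - 0.5) / N     # tuning offsets spanning (0,1), Equation (9)
beta = 5.0                               # sigmoid slope (its exact value is irrelevant)

def external_current_sigmoid(alpha):
    """Sigmoidal tuning curves of Equation (3), one current per neuron."""
    return np.tanh(beta * (alpha - mu))

def external_current_periodic(alpha):
    """Periodic tuning curves of Equation (4)."""
    return np.sin(2 * np.pi * (alpha - mu))
```

After binarization by Equation (2), the sigmoidal tuning yields +1 for all neurons whose offset lies below the stimulus and &#x02212;1 for the rest, which is the step-pattern structure exploited later in Equation (10).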
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Schematic illustration of the neural circuit model with its tuning curves and recurrent connections</bold>. Each circle represents one neuron and each arrow a synaptic connection. Each rectangle shows a tuning curve for one neuron, namely the external current afferent to that neuron plotted as a function of the stimulus value. <bold>Left</bold>: sigmoidal tuning curves. <bold>Right</bold>: periodic tuning curves.</p></caption>
<graphic xlink:href="fnsyn-06-00026-g0001.tif"/>
</fig>
<p>The internal current is the sum of activity <italic>x<sub>j</sub></italic> of pre-synaptic neurons weighted by the synaptic matrix <italic>J<sub>ij</sub></italic>, namely</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M5"><mml:mrow><mml:msubsup><mml:mi>I</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:mi>N</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>j</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Synaptic weights have binary values, <italic>J<sub>ij</sub></italic> &#x0003D; &#x000B1;1, except for the self-couplings, which are set to zero, <italic>J<sub>ii</sub></italic> &#x0003D; 0. Synaptic strengths are initialized at random, &#x0002B;1 and &#x02212;1 with equal probability. The synaptic plasticity rule is Hebbian, meaning that it follows the correlation of the pre- and post-synaptic neurons (the product <italic>x<sub>i</sub>x<sub>j</sub></italic>). Synaptic weights are updated at random at each time step according to the following transition probabilities. The probability of potentiation from time <italic>t</italic> to time <italic>t</italic> &#x0002B; &#x00394;<italic>t</italic> is the probability of the transition from <italic>J<sub>ij</sub></italic>(<italic>t</italic>) &#x0003D; &#x02212;1 to <italic>J<sub>ij</sub></italic>(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) &#x0003D; &#x0002B;1, and is defined as</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M6"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mo>+</mml:mo></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>Conversely, the probability of depression from time <italic>t</italic> to time <italic>t</italic> &#x0002B; &#x00394;<italic>t</italic> is the probability of the transition from <italic>J<sub>ij</sub></italic>(<italic>t</italic>) &#x0003D; &#x0002B;1 to <italic>J<sub>ij</sub></italic>(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) &#x0003D; &#x02212;1, and is defined as</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M7"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mo>&#x02212;</mml:mo></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>Therefore, if <italic>x<sub>i</sub></italic> and <italic>x<sub>j</sub></italic> are different, there is a probability 1/&#x003C4; of synaptic depression, while if they are equal there is a probability 1/&#x003C4; of synaptic potentiation. The time constant &#x003C4; represents the average number of time steps necessary to observe a transition (in units of &#x00394;<italic>t</italic>). Note that this synaptic plasticity rule is symmetric, namely the same transition probabilities apply to <italic>J<sub>ij</sub></italic> and <italic>J<sub>ji</sub></italic>. In order to reduce the effect of finite-size noise, symmetry of synaptic weights is enforced at each transition by updating half of the synapses and setting <italic>J<sub>ij</sub></italic> &#x0003D; <italic>J<sub>ji</sub></italic>. This enforcement does not change the qualitative behavior of the model.</p>
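A possible vectorized implementation of this stochastic Hebbian rule, with the symmetry enforcement described above, is sketched below; the function name, the random seed, and the data layout are my own assumptions, as the paper provides no code.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily

def plasticity_step(J, x, tau):
    """One stochastic Hebbian update of the binary synaptic matrix.

    Implements the transition probabilities of Equations (6)-(7): each
    synapse is set to the sign of the correlation x_i * x_j with
    probability 1/tau (a no-op if it already matches, so only the
    opposite state actually flips).  Symmetry J_ij = J_ji and zero
    self-couplings are enforced, as described in the text.
    """
    N = len(x)
    corr = np.outer(x, x)                      # x_i * x_j = +1 or -1
    flip = rng.random((N, N)) < 1.0 / tau      # which synapses update this step
    flip = np.triu(flip, 1)                    # draw the upper triangle only...
    J = np.where(flip, corr, J)
    J = np.triu(J, 1) + np.triu(J, 1).T        # ...and mirror it to keep J symmetric
    np.fill_diagonal(J, 0)                     # J_ii = 0
    return J
```

Setting a synapse to the correlation with probability 1/&#x003C4; reproduces both Equation (6) and Equation (7), since potentiation is only effective when <italic>J<sub>ij</sub></italic> &#x0003D; &#x02212;1 and depression only when <italic>J<sub>ij</sub></italic> &#x0003D; &#x0002B;1.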
<p>The tuning curve of a neuron is modified during presentation of stimuli, as a consequence of adaptation. I implement a phenomenon known as adaptation to the mean, which is ubiquitously observed in a wide range of species, sensory modalities and stimulus variables (Wark et al., <xref ref-type="bibr" rid="B85">2007</xref>; Rieke and Rudd, <xref ref-type="bibr" rid="B64">2009</xref>). In particular, the presentation of a given stimulus determines a change in the tuning offsets of neurons such that they tend to converge toward that stimulus. An illustration of this dynamics is shown in Figure <xref ref-type="fig" rid="F2">2</xref>. The tuning offset is a function of time, &#x003BC;<sub><italic>i</italic></sub>(<italic>t</italic>), and changes according to</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M8"><mml:mrow><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mfrac><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msubsup><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi><mml:mn>0</mml:mn></mml:msubsup><mml:mo>&#x02212;</mml:mo><mml:mi>&#x00398;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>where &#x003C4; is the timescale of adaptation in units of &#x00394;<italic>t</italic> (&#x003C4; is chosen equal to the timescale of plasticity), &#x003B1;(<italic>t</italic>) is the stimulus presented at time <italic>t</italic>, and &#x00398; is the step function. The initial values of the tuning offsets &#x003BC;<sub><italic>i</italic></sub>(0) &#x0003D; &#x003BC;<sup>0</sup><sub><italic>i</italic></sub> are chosen to span uniformly the interval (0,1), namely</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M9"><mml:mrow><mml:msubsup><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi><mml:mn>0</mml:mn></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:mfrac><mml:mtext>&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;</mml:mtext><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi></mml:mrow></mml:math></disp-formula>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>The adaptation mechanism, by which tuning curves of neurons are modified according to the presented stimulus</bold>. Tuning curves of two neurons are shown, one neuron in blue and the other one in red, before and after adaptation (full and dashed line, respectively). The presented stimulus is indicated by the black dot and the vertical black line. The tuning offsets of the two neurons are shown by the blue and red dots. Tuning offsets are attracted by the stimulus, as shown by the arrows. <bold>Left</bold>: sigmoidal tuning curves. <bold>Right</bold>: periodic tuning curves.</p></caption>
<graphic xlink:href="fnsyn-06-00026-g0002.tif"/>
</fig>
<p>The dynamics of tuning offsets, Equation (8), implies that they are attracted by the presented stimulus, more so if they are closer to it (see Figure <xref ref-type="fig" rid="F2">2</xref>). For convenience, tuning offsets are reordered at each time step such that &#x003BC;<sub>1</sub> &#x0003C; &#x02026; &#x0003C; &#x003BC;<sub><italic>N</italic></sub>. Namely, after each update by Equation (8), if &#x003BC;<sub><italic>i</italic></sub> &#x0003C; &#x003BC;<italic><sub>j</sub></italic> for <italic>i</italic> &#x0003E; <italic>j</italic> then &#x003BC;<sub><italic>i</italic></sub> &#x02192; &#x003BC;<sub><italic>j</italic></sub> and &#x003BC;<sub><italic>j</italic></sub> &#x02192; &#x003BC;<italic><sub>i</sub></italic> (no other neural parameters are permuted in this step). Learning works without the permutation step, but the permutation simplifies the implementation of the model.</p>
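Equation (8), followed by the re-sorting of offsets, can be sketched as below. The convention &#x00398;(0) &#x0003D; 0 and all names are my own assumptions; since the stimulus is drawn from a continuous distribution, the choice at exactly &#x003BC;<sub><italic>i</italic></sub> &#x0003D; &#x003B1; is non-generic.

```python
import numpy as np

def adaptation_step(mu, mu0, alpha, tau):
    """Adaptation to the mean, Equation (8), followed by re-sorting.

    Offsets below the stimulus alpha move up by mu0/tau, and offsets
    above it move down by (mu0 - 1)/tau, so all offsets drift toward
    the presented stimulus.
    """
    theta = (mu - alpha > 0).astype(float)   # step function, with Theta(0) = 0 assumed
    mu = mu + (mu0 - theta) / tau
    return np.sort(mu)                       # enforce mu_1 < ... < mu_N
```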
<p>The spontaneous dynamics of the network is tested 10 times in each simulation, at fixed intervals of <italic>T</italic>/10 trials, referred to as 10 sessions. In each session of the spontaneous stage, external current, plasticity and adaptation are turned off, and the internal current is turned on. Recurrent neural dynamics is simulated with a time step &#x00394;<italic>t</italic> that is different from the one used during the stimulus-driven stage. In the stimulus-driven stage, &#x00394;<italic>t</italic> reflects the presentation of a stimulus, which may occur on a time interval of a few seconds, while &#x00394;<italic>t</italic> in the spontaneous stage reflects the fast interaction between neurons through the internal currents, of the order of tens of milliseconds. This internal dynamics is implemented by running Equations (2) and (5) until the network reaches a stable fixed point, at which neural activity does not change from one time step to the next. Then, this stable state is recorded and the stimulus-driven dynamics is resumed until the next session.</p>
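The fixed-point search of the spontaneous stage amounts to a synchronous iteration of Equations (2) and (5). The sketch below illustrates the idea under my own assumptions (a cap on the number of steps, ties broken toward +1); it is not the authors' code.

```python
import numpy as np

def spontaneous_fixed_point(J, x0, max_steps=1000):
    """Iterate Equations (2) and (5) with no external current until the
    activity pattern stops changing, i.e., a stable fixed point."""
    N = len(x0)
    x = np.array(x0)
    for _ in range(max_steps):
        current = J @ x / (2 * N)                 # internal current, Equation (5)
        x_new = np.where(current >= 0, 1, -1)     # Equation (2), ties broken toward +1
        if np.array_equal(x_new, x):
            return x                              # stable state reached
        x = x_new
    raise RuntimeError("no fixed point reached within max_steps")
```

For instance, with a Hopfield-like matrix storing a single pattern, starting from a corrupted copy of that pattern recovers the stored pattern in one step.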
<p>Across the 10 sessions of one simulation, the stable state may change as a consequence of the synaptic changes that occurred during the stimulus-driven stage. I show in the Appendix that the stable state must have a specific form that depends on a single parameter &#x003BD; (see Appendix: &#x0201C;The spontaneous dynamics of neurons&#x0201D;). In general, this state is denoted by &#x003BE;(&#x003BD;, &#x003BC;<sub><italic>i</italic></sub>) and, for a sigmoidal tuning curve, it is equal to</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M10"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext>sign</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where &#x003BD; is defined as the &#x0201C;retrieved&#x0201D; pattern and is sufficient to identify the entire network state. This form implies that the spontaneous state is equal to a pattern that would be obtained in the presence of stimulus &#x003B1; &#x0003D; &#x003BD;. To explore the possible existence of multiple stable states, I run several simulations, each with a different initial condition, varying across the possible values of the retrieved pattern.</p>
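Equation (10) defines a one-parameter family of step patterns, so initial conditions that tile the possible retrieved patterns can be generated by sweeping &#x003BD;. A minimal sketch follows; the network size and the grid of &#x003BD; values are arbitrary choices of mine.

```python
import numpy as np

def candidate_pattern(nu, mu):
    """Step pattern of Equation (10): x_i = sign(nu - mu_i)."""
    return np.where(mu < nu, 1, -1)

# tuning offsets as in Equation (9), and a grid of nu values used as
# initial conditions spanning the possible retrieved patterns
mu = (np.arange(1, 9) - 0.5) / 8
initial_states = [candidate_pattern(nu, mu) for nu in np.linspace(0, 1, 17)]
```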
<p>The model has three parameters, <italic>N</italic>, <italic>T</italic>, and &#x003C4;, fixed in each simulation. I used <italic>N</italic> &#x0003D; 1000 in most simulations, with a few simulations using <italic>N</italic> &#x0003D; 2000, 4000, 8000, and 16000. The values of <italic>T</italic> used in simulations are <italic>T</italic> &#x0003D; 1000, <italic>T</italic> &#x0003D; 10000, and <italic>T</italic> &#x0003D; 100000; the values of &#x003C4; are &#x003C4; &#x0003D; 100, &#x003C4; &#x0003D; 1000, and &#x003C4; &#x0003D; 10000.</p>
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<p>A simulation of neural circuit dynamics is divided into two separate stages: a stimulus-driven stage and a spontaneous stage. During the stimulus-driven stage, sensory stimuli are presented and the external currents dominate over the internal currents. The response of a neuron to external stimuli is characterized by the tuning curve of that neuron (illustrated in Figure <xref ref-type="fig" rid="F1">1</xref>). During the spontaneous stage, there is no sensory stimulus and the neural circuit activates autonomously according to its internal currents. Synaptic plasticity and sensory adaptation occur during the stimulus-driven stage. Synaptic plasticity is implemented by a simple Hebbian rule, while sensory adaptation is implemented by modifying the tuning curves of neurons (illustrated in Figure <xref ref-type="fig" rid="F2">2</xref>). In each simulation, 10 sessions of stimulus-driven dynamics alternate with 10 sessions of spontaneous dynamics (see Materials and Methods for details).</p>
<p>Synaptic strengths are initialized at random, and they start switching as a result of plasticity, depending on the neural patterns of activity enforced by the presentation of stimuli. Upon presentation of a given stimulus, synaptic plasticity tends to make the corresponding neural activity pattern more stable, because of the Hebbian rule. A series of different stimuli are subsequently presented, determining a series of corresponding neural activity patterns. Therefore, because of the presentation of multiple stimuli, each pattern competes with other patterns for switching synapses in its own favor.</p>
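The competition described here can be sketched with a stochastic Hebbian rule for binary synapses: each synapse switches toward the sign of the product of pre- and postsynaptic activity with a small probability, so frequently presented patterns win more switches. This rule is a hypothetical stand-in for the paper's plasticity equations, which are not reproduced here.

```python
import numpy as np

def hebbian_switch(J, x, q=0.01, rng=None):
    """Stochastic Hebbian update of binary synapses J_ij in {-1, +1}.

    Each synapse that disagrees with the Hebbian product x_i * x_j
    switches toward it with small probability q, so changes accumulate
    slowly (timescale ~ 1/q) and average over many stimulus presentations.
    """
    if rng is None:
        rng = np.random.default_rng()
    target = np.outer(x, x)                           # Hebbian product x_i x_j
    flip = (J != target) & (rng.random(J.shape) < q)  # stochastic transitions
    J_new = J.copy()
    J_new[flip] = target[flip]
    return J_new
```

With q = 1 every mismatched synapse switches, so a single update drives J to the Hebbian matrix of the presented pattern; with small q, many presentations are needed, which is how patterns compete for synapses.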
<p>However, neural activity patterns not only compete but also cooperate. Since the tuning curves of neurons are smooth functions of the stimulus (Figure <xref ref-type="fig" rid="F1">1</xref>), two similar values of the stimulus correspond to two neural activity patterns that are also similar. Therefore, neural activity patterns are correlated, and two similar patterns collaborate in switching synapses toward stabilizing both of them. The resolution of this competition-cooperation trade-off depends on the distribution of input stimuli: if a subset of nearby stimuli is presented more often than other stimuli, the corresponding neural activity patterns will stabilize at the expense of others.</p>
<p>In order to determine which neural activity patterns stabilize as a consequence of synaptic plasticity, I measure the stable fixed points of the spontaneous dynamics, also referred to as &#x0201C;attractors&#x0201D; or &#x0201C;stable states.&#x0201D; At ten regular intervals (sessions) in each simulation, the stream of external stimuli is interrupted and the spontaneous dynamics is tested in the presence of the internal currents only. This dynamics runs until the network reaches a stable fixed point, a neural activity pattern that does not change unless the system is perturbed. In each session, the spontaneous dynamics runs multiple times, with different initial conditions, to test for multiple stable states. After recording all the stable states, the stimulus-driven dynamics is resumed until the next session. The process is repeated for the 10 sessions of each simulation (see Materials and Methods).</p>
<p>The neural activity pattern corresponding to a given stable state is summarized by a single parameter, the &#x0201C;retrieved pattern.&#x0201D; This corresponds to the stimulus that, when presented, elicits exactly that neural pattern of activity. During the spontaneous dynamics no stimulus is presented; nevertheless, the stable state is equivalent to the pattern elicited by that stimulus. The fact that the spontaneous dynamics reproduces the activity corresponding to a stimulus implies that synaptic plasticity has previously worked toward stabilizing that stimulus. I show in the Appendix (section: &#x0201C;The spontaneous dynamics of neurons&#x0201D;) that a spontaneous state is equal to a stimulus pattern provided that stimulus-driven synaptic plasticity has occurred for a sufficiently long time.</p>
<p>Figure <xref ref-type="fig" rid="F3">3</xref> shows the stable states (retrieved patterns), recorded during 10 subsequent sessions of spontaneous dynamics (ten rows, from top to bottom), plotted together with the distribution of input stimuli (top curve), in four different simulations (four panels). In each simulation, a different probability density <italic>p</italic>(&#x003B1;) is used to draw the sequence of input stimuli. Stimuli that are located near the highest mode of the distribution are more likely to be presented. Therefore, the corresponding neural activity patterns occur more often, and drive synaptic plasticity toward their own stabilization. As a consequence, in the first session one attractor state appears near the highest mode of the distribution (top row). In one simulation, two attractors appear near the highest mode. In another simulation, two attractor states appear, one near the highest mode, and another one near the second highest.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Distribution of stimuli <italic>p</italic>(&#x003B1;) (probability density, black curve) from which the sequence of input stimuli is drawn, and the stable fixed points of the spontaneous dynamics (attractors, blue stars), referred to as &#x0201C;retrieved patterns.&#x0201D;</bold> Four example simulations are shown in the four panels. Each stable fixed point is denoted by a star along the stimulus space. Different rows in each panel correspond to the 10 sessions of that simulation, ordered from top to bottom. In early sessions, a few attractors tend to locate near the modes of the probability density. In late sessions, several attractors sample the entire space of stimuli in proportion to their likelihood. A histogram of the attractors from 1000 sessions (blue bars) is superimposed on the probability density of stimuli.</p></caption>
<graphic xlink:href="fnsyn-06-00026-g0003.tif"/>
</fig>
<p>The landscape of attractor states changes significantly over the ten subsequent sessions (Figure <xref ref-type="fig" rid="F3">3</xref>, ten rows, from top to bottom). First of all, attractor states are not maintained: if a given state is an attractor in one session, it is not necessarily an attractor in the next session. This is a consequence of the ongoing synaptic plasticity and the ongoing presentation of stimuli. Both processes are noisy: synaptic transitions are stochastic, and stimuli are drawn at random from the given probability density <italic>p</italic>(&#x003B1;). Most importantly, the number of attractor states increases significantly in subsequent sessions. In each simulation, the first session has only one or at most two attractor states. Numerous attractor states appear in subsequent sessions, which seem to sample precisely the distribution of input stimuli.</p>
<p>In summary, the spontaneous activity of the neural circuit shows a large number of stable states that sample exactly the distribution of input stimuli. Therefore, spontaneous activity tends to linger on neural activity patterns that correspond to specific input stimuli, more so if those stimuli have been experienced more often. Formally, spontaneous neural activity stops at a stable state and stays there indefinitely. However, in the presence of noise, spontaneous activity would jump between attractor states (Amit, <xref ref-type="bibr" rid="B3">1992</xref>), and would spend more time where a larger number of attractor states are present. In addition, I show below that an infinite number of stable fixed points (a continuous attractor) develops in the limit of an infinite number of neurons, implying that spontaneous activity is virtually free to sample the distribution. This property makes the neural circuit akin to a generative model of the stimuli (see Discussion).</p>
<p>The increase in the number of attractor states is a consequence of adaptation, as illustrated in Figure <xref ref-type="fig" rid="F4">4</xref>. Initially, synapses tend to favor the neural activity patterns of stimuli that are encountered more often. However, adaptation tends to counterbalance this effect. In order to illustrate this, I associate each stimulus with a specific region of the neural circuit: when a stimulus is equal to the tuning offset of a neuron, I associate the stimulus with the location of that neuron. In Figure <xref ref-type="fig" rid="F4">4</xref> (left), most stimuli are presented (gray shading) in the top left part of the network (therefore, most stimuli are equal to the tuning offsets of neurons in that part). As a consequence of adaptation, the tuning offsets of neurons also tend to concentrate in that part of the network. This is illustrated by the external arrows, which represent fixed shapes of tuning curves and are &#x0201C;repelled&#x0201D; by those stimuli (right). The new organization of neurons following this transformation implies that the distribution of input stimuli now looks uniform (gray shading). I show in the Appendix that the distribution of tuning offsets of neurons matches exactly the distribution of presented stimuli (see Appendix: &#x0201C;The dynamics of tuning offsets&#x0201D;).</p>
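The matching between tuning offsets and stimuli derived in the Appendix can be illustrated by a shortcut: placing the offsets at the empirical quantiles of the presented stimuli makes their distribution match the stimulus distribution, so roughly equal numbers of stimuli fall between consecutive offsets and the network effectively sees a uniform density. This sketch realizes the stationary state only, not the paper's adaptation dynamics.

```python
import numpy as np

def adapt_offsets_to_quantiles(stimuli, n_neurons):
    """Place tuning offsets at the empirical quantiles of observed stimuli.

    A hypothetical shortcut to the stationary state: the distribution of
    the returned offsets matches the distribution of the stimuli, which
    is the matching property derived in the Appendix.
    """
    q = (np.arange(n_neurons) + 0.5) / n_neurons   # evenly spaced quantile levels
    return np.quantile(stimuli, q)
```

For stimuli drawn from a skewed density, the fraction of stimuli below the k-th offset is close to its quantile level, i.e., the stimulus distribution looks uniform in the adapted neural coordinates.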
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Effect of adaptation on the representation of stimuli</bold>. <bold>Left</bold>: illustration of the neural circuit model, same as in Figure <xref ref-type="fig" rid="F1">1</xref>. The tuning curves of different neurons are not shown here, but are still represented by the arrows pointing from the external circle to the neural circuit. The gray shading illustrates the distribution of external stimuli according to the network selectivity: the bump on the top left of the figure implies that most stimuli are presented in that region. By definition, a stimulus presented at a given place of the neural circuit is defined as equal to the tuning offset of the corresponding neuron. <bold>Right</bold>: after adaptation, the tuning curves of neurons are changed, as shown by the displacement of the arrows from the bulk of the stimulus distribution. As a consequence, the stimulus distribution in gray shading now appears uniform across the network (uniform gray shading around the circle).</p></caption>
<graphic xlink:href="fnsyn-06-00026-g0004.tif"/>
</fig>
<p>Therefore, the network effectively &#x0201C;sees&#x0201D; a uniform distribution of presented stimuli. When synaptic plasticity applies to this uniform distribution, no specific stimulus pattern is favored with respect to any other. Therefore, the distribution of synaptic strengths does not favor any specific stimulus, and all patterns are equally likely to represent an attractor state. The increase in the number of attractors across sessions reflects the fact that synaptic plasticity tends to make more and more patterns suitable for stability. However, due to the finite size of the network, the stochastic synaptic transitions, and the random presentation of stimuli, some neural patterns are still more likely than others to stabilize. Note that attractors distribute uniformly in the neural space, but since the neural representation of stimuli has changed, via the change in tuning curves, the attractors follow the distribution of stimuli in the stimulus space (see Discussion), as shown in Figure <xref ref-type="fig" rid="F3">3</xref>.</p>
<p>Figure <xref ref-type="fig" rid="F5">5</xref> (left) shows the number of attractor states as a function of session for three different values of the timescale &#x003C4; of plasticity and adaptation (both phenomena are assumed to evolve according to the same time constant &#x003C4;). As described above, the number of attractor states increases in subsequent sessions. In addition, the number of attractor states also increases as a function of the timescale &#x003C4;. A larger timescale implies a smaller effect of noise, because changes in synaptic strengths and tuning curves are slow enough to encompass a large number of stimulus presentations and average out the resulting fluctuations.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Number of attractor states as a function of session number, timescale &#x003C4; (left), and number of neurons (right)</bold>. The number of attractor states increases in subsequent sessions and for slower timescales. The scaling of the number of attractors with respect to the number of neurons is &#x0007E;<italic>N</italic><sup>2/3</sup>.</p></caption>
<graphic xlink:href="fnsyn-06-00026-g0005.tif"/>
</fig>
<p>In order for the spontaneous activity to match exactly the distribution of input stimuli, the landscape of attractor states should converge to an infinite number of fixed points (a continuous attractor) in the limit of a large number of neurons. I tested this hypothesis by looking at how the number of attractor states scales with the number of neurons. The result is shown in Figure <xref ref-type="fig" rid="F5">5</xref> (right), where the number of attractor states is calculated in the limit of large &#x003C4; and in stationary conditions. The number of attractor states increases as a function of the number of neurons according to a power law &#x0007E;<italic>N</italic><sup>2/3</sup>. Using mathematical arguments, I show in the Appendix that in the limit of large <italic>N</italic> and &#x003C4;, the dynamics indeed converges to a continuous (line) attractor state (see Appendix: &#x0201C;The spontaneous dynamics of neurons&#x0201D;).</p>
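The reported exponent can be checked by a log-log fit of attractor counts against network size. The counts below are synthetic, generated to follow &#x0007E;<italic>N</italic><sup>2/3</sup> for illustration; they are not the simulation results of Figure 5.

```python
import numpy as np

# Network sizes used in the simulations, and hypothetical attractor counts
# generated to follow the power law ~ N^(2/3) (synthetic illustration).
N = np.array([1000, 2000, 4000, 8000, 16000], dtype=float)
counts = 0.5 * N ** (2.0 / 3.0)

# On a log-log plot a power law is a straight line, so the slope of
# log(counts) vs log(N) estimates the scaling exponent.
slope, intercept = np.polyfit(np.log(N), np.log(counts), 1)
```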
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>It is well known that Hebbian synaptic plasticity determines stable and autonomous neural patterns of activity, sometimes called &#x0201C;cell assemblies,&#x0201D; or &#x0201C;attractors&#x0201D; (Hopfield, <xref ref-type="bibr" rid="B39">1982</xref>; Bernacchia and Amit, <xref ref-type="bibr" rid="B7">2007</xref>). These stable states are spontaneous, since they can activate in the absence of an external stimulus. In this work, I showed that if sensory adaptation is added to synaptic plasticity, these spontaneous states replicate the activity evoked by the previously experienced stimuli, in proportion to their relative occurrence. In other words, this set of stable states samples precisely the distribution of stimuli, and the neural circuit represents a generative model of the input stimuli (Hinton, <xref ref-type="bibr" rid="B37">2007</xref>, <xref ref-type="bibr" rid="B38">2010</xref>; Fiser et al., <xref ref-type="bibr" rid="B30">2010</xref>; Barra et al., <xref ref-type="bibr" rid="B5">2012</xref>; Clark, <xref ref-type="bibr" rid="B19">2013</xref>). This is consistent with the observation that spontaneous activity of neurons in visual cortex reproduces the stimulus-evoked activity (Kenet et al., <xref ref-type="bibr" rid="B43">2003</xref>; Berkes et al., <xref ref-type="bibr" rid="B6">2011</xref>). According to Bayesian models, neural activity may represent the prior distribution of stimuli, either by encoding the value of the probability (Pouget et al., <xref ref-type="bibr" rid="B62">2003</xref>), or by sampling that distribution (Hoyer and Hyvarinen, <xref ref-type="bibr" rid="B40">2003</xref>). The present work is more consistent with the latter interpretation.</p>
<p>Bayesian models have been applied to a broad variety of problems in Neuroscience (Vilares and Kording, <xref ref-type="bibr" rid="B83">2011</xref>), including multi-sensory integration (Ernst and B&#x000FC;lthoff, <xref ref-type="bibr" rid="B25">2004</xref>; Knill and Pouget, <xref ref-type="bibr" rid="B44">2004</xref>), sensory-motor control and action selection (K&#x000F6;rding and Wolpert, <xref ref-type="bibr" rid="B47">2006</xref>; Berniker et al., <xref ref-type="bibr" rid="B9">2011</xref>). Bayesian models propose that neural circuits maintain a representation of the probability distribution of sensory stimuli (prior), and combine this prior distribution with new incoming information (Fiser et al., <xref ref-type="bibr" rid="B30">2010</xref>). Probability distributions are believed to be represented by the activity of populations of neurons (Pouget et al., <xref ref-type="bibr" rid="B62">2003</xref>). However, while the neural mechanisms of multi-sensory integration are starting to be elucidated (Stein and Stanford, <xref ref-type="bibr" rid="B77">2008</xref>; Angelaki et al., <xref ref-type="bibr" rid="B4">2009</xref>), it remains unknown how the brain forms priors and how it combines them with new information (Vilares and Kording, <xref ref-type="bibr" rid="B83">2011</xref>).</p>
<p>The model studied in this work is characterized by binary neurons and binary synapses, and includes a simple model of sensory adaptation and synaptic plasticity. Because of its simplicity, the model does not account for a range of biological phenomena observed in real neurons and synapses, and any comparison between the model and experimental data may be only qualitative. However, the model can be easily simulated and analyzed, and the results can be understood in a formal mathematical framework. Details of the mathematical analysis of the model are developed in the Appendix. It remains to be tested whether the qualitative conclusions afforded by the model may be generalized to biologically more realistic situations.</p>
<p>Previous studies have examined more realistic neural circuit models including synaptic plasticity and spike-frequency adaptation, and showed that they optimize information transmission (Hennequin et al., <xref ref-type="bibr" rid="B35">2010</xref>), and reproduce visual responses (Zylberberg et al., <xref ref-type="bibr" rid="B89">2011</xref>). However, spike-frequency adaptation is different from the adaptation studied in this work, which is usually referred to as &#x0201C;sensory adaptation&#x0201D; (Wark et al., <xref ref-type="bibr" rid="B85">2007</xref>; Gutkin and Zeldenrust, <xref ref-type="bibr" rid="B34">2014</xref>). Sensory adaptation is a more general phenomenon, and spike-frequency adaptation is one of several possible mechanisms by which it is implemented in neural systems. In this work, I consider sensory adaptation without referring to any specific biological mechanism. This is expressed as a change in the tuning curve of neurons according to the adapting stimulus. In particular, I consider the attraction of the tuning curve by the adapting stimulus, which has been ubiquitously observed in the case of monotonic tuning curves (e.g., sigmoidal, Kohn, <xref ref-type="bibr" rid="B45">2007</xref>; Rieke and Rudd, <xref ref-type="bibr" rid="B64">2009</xref>). In the case of unimodal tuning curves (e.g., sine), both repulsion (M&#x000FC;ller et al., <xref ref-type="bibr" rid="B58">1999</xref>; Dragoi et al., <xref ref-type="bibr" rid="B24">2000</xref>) and attraction (Kohn and Movshon, <xref ref-type="bibr" rid="B46">2004</xref>) of the tuning curve by the adapting stimulus have been observed. However, note that repulsion and attraction in those cases are defined with respect to the &#x0201C;preferred stimulus&#x0201D; of a neuron, instead of the &#x0201C;tuning offset.&#x0201D; In the present model, both repulsion and attraction can be observed with respect to the preferred stimulus (see e.g., Figure <xref ref-type="fig" rid="F2">2</xref>).</p>
<p>A substantial assumption of this work is that the representation of the stimulus follows the change in the tuning curves of neurons. In other words, a given neural activity pattern that represents a given stimulus at some moment in time may represent a different stimulus later, because the tuning curves of neurons have changed. That is, I assume that the &#x0201C;homunculus&#x0201D; is &#x0201C;aware&#x0201D; of adaptation, while perceptual changes seem to be consistent with an &#x0201C;unaware&#x0201D; homunculus (Seri&#x000E8;s et al., <xref ref-type="bibr" rid="B70">2009</xref>), leading to what has been previously referred to as a decoding ambiguity (Fairhall et al., <xref ref-type="bibr" rid="B26">2001</xref>), or coding catastrophe (Schwartz et al., <xref ref-type="bibr" rid="B69">2007</xref>). However, behavioral and physiological observations are also consistent with a homunculus that is initially unaware of adaptation, but slowly catches up after enough time has passed since changes in stimulus encoding. In the present work, this could be modeled by using a faster timescale for adaptation and a slower timescale for plasticity. Future work will investigate the effects of changes in those timescales on the network dynamics and the attractor landscape (Chaudhuri et al., <xref ref-type="bibr" rid="B18">2014</xref>).</p>
<p>In the present model, spontaneous activity converges to an attractor, a stable state of the neural dynamics and, by definition, it stays there indefinitely. However, in the presence of noise, neural activity jumps between attractors (Amit, <xref ref-type="bibr" rid="B3">1992</xref>), and the dynamics visits the different attractor states equally often. Furthermore, I showed that in the limit of an infinite number of neurons, the set of attractor states becomes infinite and converges to a continuous (line) attractor spanning the entire stimulus set. In that limit, the dynamics of neurons would not display discrete jumps; rather, it would sample exactly and uniformly the continuous space of attractors.</p>
<p>As a final remark, note that in addition to representing a generative model of input stimuli, the model described here represents a solution to the problem of developing a continuous attractor from a set of discrete attractors, as previously investigated by Koulakov et al. (<xref ref-type="bibr" rid="B48">2002</xref>), Renart et al. (<xref ref-type="bibr" rid="B63">2003</xref>), Blumenfeld et al. (<xref ref-type="bibr" rid="B12">2006</xref>), Itskov et al. (<xref ref-type="bibr" rid="B41">2011</xref>).</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>The author would like to acknowledge Florin Ionita, Dong Li, Carlotta Martelli, and Benjamin Staar for useful comments on an earlier version of the manuscript.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abbott</surname> <given-names>L.</given-names></name></person-group> (<year>2008</year>). <article-title>Theoretical neuroscience rising</article-title>. <source>Neuron</source> <volume>60</volume>, <fpage>489</fpage>&#x02013;<lpage>495</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2008.10.019</pub-id><pub-id pub-id-type="pmid">18995824</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abbott</surname> <given-names>L. F.</given-names></name> <name><surname>Nelson</surname> <given-names>S. B.</given-names></name></person-group> (<year>2000</year>). <article-title>Synaptic plasticity: taming the beast</article-title>. <source>Nat. Neurosci</source>. <volume>3</volume>, <fpage>1178</fpage>&#x02013;<lpage>1183</lpage>. <pub-id pub-id-type="doi">10.1038/81453</pub-id><pub-id pub-id-type="pmid">11127835</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Amit</surname> <given-names>D. J.</given-names></name></person-group> (<year>1992</year>). <source>Modeling Brain Function: The World of Attractor Neural Networks</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Angelaki</surname> <given-names>D. E.</given-names></name> <name><surname>Gu</surname> <given-names>Y.</given-names></name> <name><surname>DeAngelis</surname> <given-names>G. C.</given-names></name></person-group> (<year>2009</year>). <article-title>Multisensory integration: psychophysics, neurophysiology, and computation</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>19</volume>, <fpage>452</fpage>&#x02013;<lpage>458</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2009.06.008</pub-id><pub-id pub-id-type="pmid">19616425</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barra</surname> <given-names>A.</given-names></name> <name><surname>Bernacchia</surname> <given-names>A.</given-names></name> <name><surname>Santucci</surname> <given-names>E.</given-names></name> <name><surname>Contucci</surname> <given-names>P.</given-names></name></person-group> (<year>2012</year>). <article-title>On the equivalence of hopfield networks and boltzmann machines</article-title>. <source>Neural Netw</source>. <volume>34</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2012.06.003</pub-id><pub-id pub-id-type="pmid">22784924</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berkes</surname> <given-names>P.</given-names></name> <name><surname>Orb&#x000E1;n</surname> <given-names>G.</given-names></name> <name><surname>Lengyel</surname> <given-names>M.</given-names></name> <name><surname>Fiser</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment</article-title>. <source>Science</source> <volume>331</volume>, <fpage>83</fpage>&#x02013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.1126/science.1195870</pub-id><pub-id pub-id-type="pmid">21212356</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bernacchia</surname> <given-names>A.</given-names></name> <name><surname>Amit</surname> <given-names>D. J.</given-names></name></person-group> (<year>2007</year>). <article-title>Impact of spatiotemporally correlated images on the structure of memory</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>104</volume>, <fpage>3544</fpage>&#x02013;<lpage>3549</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0611395104</pub-id><pub-id pub-id-type="pmid">17360679</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bernacchia</surname> <given-names>A.</given-names></name> <name><surname>Wang</surname> <given-names>X.-J.</given-names></name></person-group> (<year>2013</year>). <article-title>Decorrelation by recurrent inhibition in heterogeneous neural circuits</article-title>. <source>Neural Comput</source>. <volume>25</volume>, <fpage>1732</fpage>&#x02013;<lpage>1767</lpage>. <pub-id pub-id-type="doi">10.1162/NECO-a-00451</pub-id><pub-id pub-id-type="pmid">23607559</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Berniker</surname> <given-names>M.</given-names></name> <name><surname>Wei</surname> <given-names>K.</given-names></name> <name><surname>Kording</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>Bayesian approaches to modelling action selection</article-title>, in <source>Modelling Natural Action Selection</source>, eds <person-group person-group-type="editor"><name><surname>Seth</surname> <given-names>A. K.</given-names></name> <name><surname>Prescott</surname> <given-names>T. J.</given-names></name> <name><surname>Bryson</surname> <given-names>J. J.</given-names></name></person-group> (<publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>), <fpage>120</fpage>&#x02013;<lpage>143</lpage>. <pub-id pub-id-type="doi">10.1017/CBO9780511731525.010</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bi</surname> <given-names>G.-Q.</given-names></name> <name><surname>Poo</surname> <given-names>M.-M.</given-names></name></person-group> (<year>2001</year>). <article-title>Synaptic modification by correlated activity: Hebb&#x00027;s postulate revisited</article-title>. <source>Ann. Rev. Neurosci</source>. <volume>24</volume>, <fpage>139</fpage>&#x02013;<lpage>166</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.24.1.139</pub-id><pub-id pub-id-type="pmid">11283308</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blum</surname> <given-names>K. I.</given-names></name> <name><surname>Abbott</surname> <given-names>L.</given-names></name></person-group> (<year>1996</year>). <article-title>A model of spatial map formation in the hippocampus of the rat</article-title>. <source>Neural Comput</source>. <volume>8</volume>, <fpage>85</fpage>&#x02013;<lpage>93</lpage>. <pub-id pub-id-type="doi">10.1162/neco.1996.8.1.85</pub-id><pub-id pub-id-type="pmid">8564805</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blumenfeld</surname> <given-names>B.</given-names></name> <name><surname>Preminger</surname> <given-names>S.</given-names></name> <name><surname>Sagi</surname> <given-names>D.</given-names></name> <name><surname>Tsodyks</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Dynamics of memory representations in networks with novelty-facilitated synaptic plasticity</article-title>. <source>Neuron</source> <volume>52</volume>, <fpage>383</fpage>&#x02013;<lpage>394</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2006.08.016</pub-id><pub-id pub-id-type="pmid">17046699</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Borst</surname> <given-names>A.</given-names></name> <name><surname>Flanagin</surname> <given-names>V. L.</given-names></name> <name><surname>Sompolinsky</surname> <given-names>H.</given-names></name></person-group> (<year>2005</year>). <article-title>Adaptation without parameter change: dynamic gain control in motion detection</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>102</volume>, <fpage>6172</fpage>&#x02013;<lpage>6176</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0500491102</pub-id><pub-id pub-id-type="pmid">15833815</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brenner</surname> <given-names>N.</given-names></name> <name><surname>Bialek</surname> <given-names>W.</given-names></name> <name><surname>de Ruyter van Steveninck</surname> <given-names>R.</given-names></name></person-group> (<year>2000</year>). <article-title>Adaptive rescaling maximizes information transmission</article-title>. <source>Neuron</source> <volume>26</volume>, <fpage>695</fpage>&#x02013;<lpage>702</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(00)81205-2</pub-id><pub-id pub-id-type="pmid">10896164</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buonomano</surname> <given-names>D. V.</given-names></name> <name><surname>Merzenich</surname> <given-names>M. M.</given-names></name></person-group> (<year>1998</year>). <article-title>Cortical plasticity: from synapses to maps</article-title>. <source>Ann. Rev. Neurosci</source>. <volume>21</volume>, <fpage>149</fpage>&#x02013;<lpage>186</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.21.1.149</pub-id><pub-id pub-id-type="pmid">9530495</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Calabresi</surname> <given-names>P.</given-names></name> <name><surname>Picconi</surname> <given-names>B.</given-names></name> <name><surname>Tozzi</surname> <given-names>A.</given-names></name> <name><surname>Di Filippo</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Dopamine-mediated regulation of corticostriatal synaptic plasticity</article-title>. <source>Trends Neurosci</source>. <volume>30</volume>, <fpage>211</fpage>&#x02013;<lpage>219</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2007.03.001</pub-id><pub-id pub-id-type="pmid">17367873</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caporale</surname> <given-names>N.</given-names></name> <name><surname>Dan</surname> <given-names>Y.</given-names></name></person-group> (<year>2008</year>). <article-title>Spike timing-dependent plasticity: a Hebbian learning rule</article-title>. <source>Ann. Rev. Neurosci</source>. <volume>31</volume>, <fpage>25</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.31.060407.125639</pub-id><pub-id pub-id-type="pmid">18275283</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chaudhuri</surname> <given-names>R.</given-names></name> <name><surname>Bernacchia</surname> <given-names>A.</given-names></name> <name><surname>Wang</surname> <given-names>X.-J.</given-names></name></person-group> (<year>2014</year>). <article-title>A diversity of localized timescales in network activity</article-title>. <source>eLife</source> <volume>3</volume>:<fpage>e01239</fpage>. <pub-id pub-id-type="doi">10.7554/eLife.01239</pub-id><pub-id pub-id-type="pmid">24448407</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clark</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Whatever next? Predictive brains, situated agents, and the future of cognitive science</article-title>. <source>Behav. Brain Sci</source>. <volume>36</volume>, <fpage>181</fpage>&#x02013;<lpage>204</lpage>. <pub-id pub-id-type="doi">10.1017/S0140525X12000477</pub-id><pub-id pub-id-type="pmid">23663408</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clifford</surname> <given-names>C. W.</given-names></name> <name><surname>Webster</surname> <given-names>M. A.</given-names></name> <name><surname>Stanley</surname> <given-names>G. B.</given-names></name> <name><surname>Stocker</surname> <given-names>A. A.</given-names></name> <name><surname>Kohn</surname> <given-names>A.</given-names></name> <name><surname>Sharpee</surname> <given-names>T. O.</given-names></name> <etal/></person-group>. (<year>2007</year>). <article-title>Visual adaptation: neural, psychological and computational aspects</article-title>. <source>Vision Res</source>. <volume>47</volume>, <fpage>3125</fpage>&#x02013;<lpage>3131</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2007.08.023</pub-id><pub-id pub-id-type="pmid">17936871</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Daw</surname> <given-names>N. D.</given-names></name> <name><surname>Doya</surname> <given-names>K.</given-names></name></person-group> (<year>2006</year>). <article-title>The computational neurobiology of learning and reward</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>16</volume>, <fpage>199</fpage>&#x02013;<lpage>204</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2006.03.006</pub-id><pub-id pub-id-type="pmid">16563737</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dean</surname> <given-names>I.</given-names></name> <name><surname>Harper</surname> <given-names>N. S.</given-names></name> <name><surname>McAlpine</surname> <given-names>D.</given-names></name></person-group> (<year>2005</year>). <article-title>Neural population coding of sound level adapts to stimulus statistics</article-title>. <source>Nat. Neurosci</source>. <volume>8</volume>, <fpage>1684</fpage>&#x02013;<lpage>1689</lpage>. <pub-id pub-id-type="doi">10.1038/nn1541</pub-id><pub-id pub-id-type="pmid">16286934</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Doya</surname> <given-names>K.</given-names></name></person-group> (<year>2007</year>). <article-title>Reinforcement learning: computational theory and biological mechanisms</article-title>. <source>HFSP J</source>. <volume>1</volume>, <fpage>30</fpage>&#x02013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.2976/1.2732246</pub-id><pub-id pub-id-type="pmid">19404458</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dragoi</surname> <given-names>V.</given-names></name> <name><surname>Sharma</surname> <given-names>J.</given-names></name> <name><surname>Sur</surname> <given-names>M.</given-names></name></person-group> (<year>2000</year>). <article-title>Adaptation-induced plasticity of orientation tuning in adult visual cortex</article-title>. <source>Neuron</source> <volume>28</volume>, <fpage>287</fpage>&#x02013;<lpage>298</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(00)00103-3</pub-id><pub-id pub-id-type="pmid">11087001</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ernst</surname> <given-names>M. O.</given-names></name> <name><surname>B&#x000FC;lthoff</surname> <given-names>H. H.</given-names></name></person-group> (<year>2004</year>). <article-title>Merging the senses into a robust percept</article-title>. <source>Trends Cogn. Sci</source>. <volume>8</volume>, <fpage>162</fpage>&#x02013;<lpage>169</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id><pub-id pub-id-type="pmid">15050512</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fairhall</surname> <given-names>A. L.</given-names></name> <name><surname>Lewen</surname> <given-names>G. D.</given-names></name> <name><surname>Bialek</surname> <given-names>W.</given-names></name> <name><surname>de Ruyter van Steveninck</surname> <given-names>R. R.</given-names></name></person-group> (<year>2001</year>). <article-title>Efficiency and ambiguity in an adaptive neural code</article-title>. <source>Nature</source> <volume>412</volume>, <fpage>787</fpage>&#x02013;<lpage>792</lpage>. <pub-id pub-id-type="doi">10.1038/35090500</pub-id><pub-id pub-id-type="pmid">11518957</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feldman</surname> <given-names>D. E.</given-names></name></person-group> (<year>2009</year>). <article-title>Synaptic mechanisms for plasticity in neocortex</article-title>. <source>Ann. Rev. Neurosci</source>. <volume>32</volume>, <fpage>33</fpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.051508.135516</pub-id><pub-id pub-id-type="pmid">19400721</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feldman</surname> <given-names>D. E.</given-names></name> <name><surname>Brecht</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Map plasticity in somatosensory cortex</article-title>. <source>Science</source> <volume>310</volume>, <fpage>810</fpage>&#x02013;<lpage>815</lpage>. <pub-id pub-id-type="doi">10.1126/science.1115807</pub-id><pub-id pub-id-type="pmid">16272113</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fiete</surname> <given-names>I. R.</given-names></name> <name><surname>Senn</surname> <given-names>W.</given-names></name> <name><surname>Wang</surname> <given-names>C. Z.</given-names></name> <name><surname>Hahnloser</surname> <given-names>R. H.</given-names></name></person-group> (<year>2010</year>). <article-title>Spike-time-dependent plasticity and heterosynaptic competition organize networks to produce long scale-free sequences of neural activity</article-title>. <source>Neuron</source> <volume>65</volume>, <fpage>563</fpage>&#x02013;<lpage>576</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2010.02.003</pub-id><pub-id pub-id-type="pmid">20188660</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fiser</surname> <given-names>J.</given-names></name> <name><surname>Berkes</surname> <given-names>P.</given-names></name> <name><surname>Orb&#x000E1;n</surname> <given-names>G.</given-names></name> <name><surname>Lengyel</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Statistically optimal perception and learning: from behavior to neural representations</article-title>. <source>Trends Cogn. Sci</source>. <volume>14</volume>, <fpage>119</fpage>&#x02013;<lpage>130</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2010.01.003</pub-id><pub-id pub-id-type="pmid">20153683</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gallistel</surname> <given-names>C.</given-names></name> <name><surname>Matzel</surname> <given-names>L. D.</given-names></name></person-group> (<year>2013</year>). <article-title>The neuroscience of learning: beyond the Hebbian synapse</article-title>. <source>Ann. Rev. Psychol</source>. <volume>64</volume>, <fpage>169</fpage>&#x02013;<lpage>200</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-psych-113011-143807</pub-id><pub-id pub-id-type="pmid">22804775</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gerstner</surname> <given-names>W.</given-names></name> <name><surname>Kempter</surname> <given-names>R.</given-names></name> <name><surname>van Hemmen</surname> <given-names>J. L.</given-names></name> <name><surname>Wagner</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>1996</year>). <article-title>A neuronal learning rule for sub-millisecond temporal coding</article-title>. <source>Nature</source> <volume>383</volume>, <fpage>76</fpage>&#x02013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1038/383076a0</pub-id><pub-id pub-id-type="pmid">8779718</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gerstner</surname> <given-names>W.</given-names></name> <name><surname>Sprekeler</surname> <given-names>H.</given-names></name> <name><surname>Deco</surname> <given-names>G.</given-names></name></person-group> (<year>2012</year>). <article-title>Theory and simulation in neuroscience</article-title>. <source>Science</source> <volume>338</volume>, <fpage>60</fpage>&#x02013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1126/science.1227356</pub-id><pub-id pub-id-type="pmid">23042882</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gutkin</surname> <given-names>B.</given-names></name> <name><surname>Zeldenrust</surname> <given-names>F.</given-names></name></person-group> (<year>2014</year>). <article-title>Spike frequency adaptation</article-title>. <source>Scholarpedia</source> <volume>9</volume>, <fpage>30643</fpage>. <pub-id pub-id-type="doi">10.4249/scholarpedia.30643</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hennequin</surname> <given-names>G.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name> <name><surname>Pfister</surname> <given-names>J.-P.</given-names></name></person-group> (<year>2010</year>). <article-title>STDP in adaptive neurons gives close-to-optimal information transmission</article-title>. <source>Front. Comput. Neurosci</source>. <volume>4</volume>:<issue>143</issue>. <pub-id pub-id-type="doi">10.3389/fncom.2010.00143</pub-id><pub-id pub-id-type="pmid">21160559</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Herz</surname> <given-names>A. V.</given-names></name> <name><surname>Gollisch</surname> <given-names>T.</given-names></name> <name><surname>Machens</surname> <given-names>C. K.</given-names></name> <name><surname>Jaeger</surname> <given-names>D.</given-names></name></person-group> (<year>2006</year>). <article-title>Modeling single-neuron dynamics and computations: a balance of detail and abstraction</article-title>. <source>Science</source> <volume>314</volume>, <fpage>80</fpage>&#x02013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1126/science.1127240</pub-id><pub-id pub-id-type="pmid">17023649</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hinton</surname> <given-names>G. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Learning multiple layers of representation</article-title>. <source>Trends Cogn. Sci</source>. <volume>11</volume>, <fpage>428</fpage>&#x02013;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2007.09.004</pub-id><pub-id pub-id-type="pmid">17921042</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hinton</surname> <given-names>G. E.</given-names></name></person-group> (<year>2010</year>). <article-title>Learning to represent visual input</article-title>. <source>Philos. Trans. R. Soc. B Biol. Sci</source>. <volume>365</volume>, <fpage>177</fpage>&#x02013;<lpage>184</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2009.0200</pub-id><pub-id pub-id-type="pmid">20008395</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hopfield</surname> <given-names>J. J.</given-names></name></person-group> (<year>1982</year>). <article-title>Neural networks and physical systems with emergent collective computational abilities</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>79</volume>, <fpage>2554</fpage>&#x02013;<lpage>2558</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.79.8.2554</pub-id><pub-id pub-id-type="pmid">6953413</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hoyer</surname> <given-names>P. O.</given-names></name> <name><surname>Hyv&#x000E4;rinen</surname> <given-names>A.</given-names></name></person-group> (<year>2003</year>). <article-title>Interpreting neural response variability as Monte Carlo sampling of the posterior</article-title>. <source>Adv. Neural Inf. Process. Syst</source>. <fpage>293</fpage>&#x02013;<lpage>300</lpage>.</citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Itskov</surname> <given-names>V.</given-names></name> <name><surname>Hansel</surname> <given-names>D.</given-names></name> <name><surname>Tsodyks</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Short-term facilitation may stabilize parametric working memory trace</article-title>. <source>Front. Comput. Neurosci</source>. <volume>5</volume>:<issue>40</issue>. <pub-id pub-id-type="doi">10.3389/fncom.2011.00040</pub-id><pub-id pub-id-type="pmid">22028690</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Katz</surname> <given-names>L. C.</given-names></name> <name><surname>Shatz</surname> <given-names>C. J.</given-names></name></person-group> (<year>1996</year>). <article-title>Synaptic activity and the construction of cortical circuits</article-title>. <source>Science</source> <volume>274</volume>, <fpage>1133</fpage>&#x02013;<lpage>1138</lpage>. <pub-id pub-id-type="doi">10.1126/science.274.5290.1133</pub-id><pub-id pub-id-type="pmid">8895456</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kenet</surname> <given-names>T.</given-names></name> <name><surname>Bibitchkov</surname> <given-names>D.</given-names></name> <name><surname>Tsodyks</surname> <given-names>M.</given-names></name> <name><surname>Grinvald</surname> <given-names>A.</given-names></name> <name><surname>Arieli</surname> <given-names>A.</given-names></name></person-group> (<year>2003</year>). <article-title>Spontaneously emerging cortical representations of visual attributes</article-title>. <source>Nature</source> <volume>425</volume>, <fpage>954</fpage>&#x02013;<lpage>956</lpage>. <pub-id pub-id-type="doi">10.1038/nature02078</pub-id><pub-id pub-id-type="pmid">14586468</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Knill</surname> <given-names>D. C.</given-names></name> <name><surname>Pouget</surname> <given-names>A.</given-names></name></person-group> (<year>2004</year>). <article-title>The Bayesian brain: the role of uncertainty in neural coding and computation</article-title>. <source>Trends Neurosci</source>. <volume>27</volume>, <fpage>712</fpage>&#x02013;<lpage>719</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2004.10.007</pub-id><pub-id pub-id-type="pmid">15541511</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kohn</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Visual adaptation: physiology, mechanisms, and functional benefits</article-title>. <source>J. Neurophysiol</source>. <volume>97</volume>, <fpage>3155</fpage>&#x02013;<lpage>3164</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00086.2007</pub-id><pub-id pub-id-type="pmid">17344377</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kohn</surname> <given-names>A.</given-names></name> <name><surname>Movshon</surname> <given-names>J. A.</given-names></name></person-group> (<year>2004</year>). <article-title>Adaptation changes the direction tuning of macaque MT neurons</article-title>. <source>Nat. Neurosci</source>. <volume>7</volume>, <fpage>764</fpage>&#x02013;<lpage>772</lpage>. <pub-id pub-id-type="doi">10.1038/nn1267</pub-id><pub-id pub-id-type="pmid">15195097</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>K&#x000F6;rding</surname> <given-names>K. P.</given-names></name> <name><surname>Wolpert</surname> <given-names>D. M.</given-names></name></person-group> (<year>2006</year>). <article-title>Bayesian decision theory in sensorimotor control</article-title>. <source>Trends Cogn. Sci</source>. <volume>10</volume>, <fpage>319</fpage>&#x02013;<lpage>326</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2006.05.003</pub-id><pub-id pub-id-type="pmid">16807063</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koulakov</surname> <given-names>A. A.</given-names></name> <name><surname>Raghavachari</surname> <given-names>S.</given-names></name> <name><surname>Kepecs</surname> <given-names>A.</given-names></name> <name><surname>Lisman</surname> <given-names>J. E.</given-names></name></person-group> (<year>2002</year>). <article-title>Model for a robust neural integrator</article-title>. <source>Nat. Neurosci</source>. <volume>5</volume>, <fpage>775</fpage>&#x02013;<lpage>782</lpage>. <pub-id pub-id-type="doi">10.1038/nn893</pub-id><pub-id pub-id-type="pmid">12134153</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krekelberg</surname> <given-names>B.</given-names></name> <name><surname>Van Wezel</surname> <given-names>R. J.</given-names></name> <name><surname>Albright</surname> <given-names>T. D.</given-names></name></person-group> (<year>2006</year>). <article-title>Adaptation in macaque MT reduces perceived speed</article-title>. <source>J. Neurophysiol</source>. <volume>95</volume>, <fpage>255</fpage>&#x02013;<lpage>270</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00750.2005</pub-id><pub-id pub-id-type="pmid">16192331</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lamprecht</surname> <given-names>R.</given-names></name> <name><surname>LeDoux</surname> <given-names>J.</given-names></name></person-group> (<year>2004</year>). <article-title>Structural plasticity and memory</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>5</volume>, <fpage>45</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1301</pub-id><pub-id pub-id-type="pmid">14708003</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Laughlin</surname> <given-names>S. B.</given-names></name></person-group> (<year>1981</year>). <article-title>A simple coding procedure enhances a neuron&#x00027;s information capacity</article-title>. <source>Z. Naturforsch</source>. <volume>36</volume>, <fpage>51</fpage>. <pub-id pub-id-type="pmid">7303823</pub-id></citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Machens</surname> <given-names>C. K.</given-names></name> <name><surname>Gollisch</surname> <given-names>T.</given-names></name> <name><surname>Kolesnikova</surname> <given-names>O.</given-names></name> <name><surname>Herz</surname> <given-names>A. V.</given-names></name></person-group> (<year>2005</year>). <article-title>Testing the efficiency of sensory coding with optimal stimulus ensembles</article-title>. <source>Neuron</source> <volume>47</volume>, <fpage>447</fpage>&#x02013;<lpage>456</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2005.06.015</pub-id><pub-id pub-id-type="pmid">16055067</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maravall</surname> <given-names>M.</given-names></name> <name><surname>Petersen</surname> <given-names>R. S.</given-names></name> <name><surname>Fairhall</surname> <given-names>A. L.</given-names></name> <name><surname>Arabzadeh</surname> <given-names>E.</given-names></name> <name><surname>Diamond</surname> <given-names>M. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Shifts in coding properties and maintenance of information transmission during adaptation in barrel cortex</article-title>. <source>PLoS Biol</source>. <volume>5</volume>:<fpage>e19</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.0050019</pub-id><pub-id pub-id-type="pmid">17253902</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martin</surname> <given-names>S.</given-names></name> <name><surname>Grimwood</surname> <given-names>P.</given-names></name> <name><surname>Morris</surname> <given-names>R.</given-names></name></person-group> (<year>2000</year>). <article-title>Synaptic plasticity and memory: an evaluation of the hypothesis</article-title>. <source>Ann. Rev. Neurosci</source>. <volume>23</volume>, <fpage>649</fpage>&#x02013;<lpage>711</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.23.1.649</pub-id><pub-id pub-id-type="pmid">10845078</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mehta</surname> <given-names>M.</given-names></name> <name><surname>Lee</surname> <given-names>A.</given-names></name> <name><surname>Wilson</surname> <given-names>M.</given-names></name></person-group> (<year>2002</year>). <article-title>Role of experience and oscillations in transforming a rate code into a temporal code</article-title>. <source>Nature</source> <volume>417</volume>, <fpage>741</fpage>&#x02013;<lpage>746</lpage>. <pub-id pub-id-type="doi">10.1038/nature00807</pub-id><pub-id pub-id-type="pmid">12066185</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miller</surname> <given-names>K. D.</given-names></name></person-group> (<year>1996</year>). <article-title>Synaptic economics: competition and cooperation in synaptic plasticity</article-title>. <source>Neuron</source> <volume>17</volume>, <fpage>371</fpage>&#x02013;<lpage>374</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(00)80169-5</pub-id><pub-id pub-id-type="pmid">8816700</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Montague</surname> <given-names>P. R.</given-names></name> <name><surname>Hyman</surname> <given-names>S. E.</given-names></name> <name><surname>Cohen</surname> <given-names>J. D.</given-names></name></person-group> (<year>2004</year>). <article-title>Computational roles for dopamine in behavioural control</article-title>. <source>Nature</source> <volume>431</volume>, <fpage>760</fpage>&#x02013;<lpage>767</lpage>. <pub-id pub-id-type="doi">10.1038/nature03015</pub-id><pub-id pub-id-type="pmid">15483596</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>M&#x000FC;ller</surname> <given-names>J. R.</given-names></name> <name><surname>Metha</surname> <given-names>A. B.</given-names></name> <name><surname>Krauskopf</surname> <given-names>J.</given-names></name> <name><surname>Lennie</surname> <given-names>P.</given-names></name></person-group> (<year>1999</year>). <article-title>Rapid adaptation in visual cortex to the structure of images</article-title>. <source>Science</source> <volume>285</volume>, <fpage>1405</fpage>&#x02013;<lpage>1408</lpage>. <pub-id pub-id-type="doi">10.1126/science.285.5432.1405</pub-id><pub-id pub-id-type="pmid">10464100</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nagel</surname> <given-names>K. I.</given-names></name> <name><surname>Doupe</surname> <given-names>A. J.</given-names></name></person-group> (<year>2006</year>). <article-title>Temporal processing and adaptation in the songbird auditory forebrain</article-title>. <source>Neuron</source> <volume>51</volume>, <fpage>845</fpage>&#x02013;<lpage>859</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2006.08.030</pub-id><pub-id pub-id-type="pmid">16982428</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ohzawa</surname> <given-names>I.</given-names></name> <name><surname>Sclar</surname> <given-names>G.</given-names></name> <name><surname>Freeman</surname> <given-names>R. D.</given-names></name></person-group> (<year>1985</year>). <article-title>Contrast gain control in the cat&#x00027;s visual system</article-title>. <source>J. Neurophysiol</source>. <volume>54</volume>, <fpage>651</fpage>&#x02013;<lpage>667</lpage>. <pub-id pub-id-type="pmid">4045542</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pawlak</surname> <given-names>V.</given-names></name> <name><surname>Wickens</surname> <given-names>J. R.</given-names></name> <name><surname>Kirkwood</surname> <given-names>A.</given-names></name> <name><surname>Kerr</surname> <given-names>J. N.</given-names></name></person-group> (<year>2010</year>). <article-title>Timing is not everything: neuromodulation opens the STDP gate</article-title>. <source>Front. Synaptic Neurosci</source>. <volume>2</volume>:<issue>146</issue>. <pub-id pub-id-type="doi">10.3389/fnsyn.2010.00146</pub-id><pub-id pub-id-type="pmid">21423532</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pouget</surname> <given-names>A.</given-names></name> <name><surname>Dayan</surname> <given-names>P.</given-names></name> <name><surname>Zemel</surname> <given-names>R. S.</given-names></name></person-group> (<year>2003</year>). <article-title>Inference and computation with population codes</article-title>. <source>Ann. Rev. Neurosci</source>. <volume>26</volume>, <fpage>381</fpage>&#x02013;<lpage>410</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.26.041002.131112</pub-id><pub-id pub-id-type="pmid">12704222</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Renart</surname> <given-names>A.</given-names></name> <name><surname>Song</surname> <given-names>P.</given-names></name> <name><surname>Wang</surname> <given-names>X.-J.</given-names></name></person-group> (<year>2003</year>). <article-title>Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks</article-title>. <source>Neuron</source> <volume>38</volume>, <fpage>473</fpage>&#x02013;<lpage>485</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(03)00255-1</pub-id><pub-id pub-id-type="pmid">12741993</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rieke</surname> <given-names>F.</given-names></name> <name><surname>Rudd</surname> <given-names>M. E.</given-names></name></person-group> (<year>2009</year>). <article-title>The challenges natural images pose for visual adaptation</article-title>. <source>Neuron</source> <volume>64</volume>, <fpage>605</fpage>&#x02013;<lpage>616</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2009.11.028</pub-id><pub-id pub-id-type="pmid">20005818</pub-id></citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Royer</surname> <given-names>S.</given-names></name> <name><surname>Par&#x000E9;</surname> <given-names>D.</given-names></name></person-group> (<year>2003</year>). <article-title>Conservation of total synaptic weight through balanced synaptic depression and potentiation</article-title>. <source>Nature</source> <volume>422</volume>, <fpage>518</fpage>&#x02013;<lpage>522</lpage>. <pub-id pub-id-type="doi">10.1038/nature01530</pub-id><pub-id pub-id-type="pmid">12673250</pub-id></citation>
</ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sakmann</surname> <given-names>B.</given-names></name> <name><surname>Creutzfeldt</surname> <given-names>O. D.</given-names></name></person-group> (<year>1969</year>). <article-title>Scotopic and mesopic light adaptation in the cat&#x00027;s retina</article-title>. <source>Pfl&#x000FC;gers Arch</source>. <volume>313</volume>, <fpage>168</fpage>&#x02013;<lpage>185</lpage>. <pub-id pub-id-type="doi">10.1007/BF00586245</pub-id><pub-id pub-id-type="pmid">5390975</pub-id></citation>
</ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sanes</surname> <given-names>J. R.</given-names></name> <name><surname>Lichtman</surname> <given-names>J. W.</given-names></name></person-group> (<year>1999</year>). <article-title>Development of the vertebrate neuromuscular junction</article-title>. <source>Ann. Rev. Neurosci</source>. <volume>22</volume>, <fpage>389</fpage>&#x02013;<lpage>442</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.22.1.389</pub-id><pub-id pub-id-type="pmid">10202544</pub-id></citation>
</ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Savin</surname> <given-names>C.</given-names></name> <name><surname>Joshi</surname> <given-names>P.</given-names></name> <name><surname>Triesch</surname> <given-names>J.</given-names></name></person-group> (<year>2010</year>). <article-title>Independent component analysis in spiking neurons</article-title>. <source>PLoS Comput. Biol</source>. <volume>6</volume>:<fpage>e1000757</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1000757</pub-id><pub-id pub-id-type="pmid">20421937</pub-id></citation>
</ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schwartz</surname> <given-names>O.</given-names></name> <name><surname>Hsu</surname> <given-names>A.</given-names></name> <name><surname>Dayan</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>Space and time in visual context</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>8</volume>, <fpage>522</fpage>&#x02013;<lpage>535</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2155</pub-id><pub-id pub-id-type="pmid">17585305</pub-id></citation>
</ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Seri&#x000E8;s</surname> <given-names>P.</given-names></name> <name><surname>Stocker</surname> <given-names>A. A.</given-names></name> <name><surname>Simoncelli</surname> <given-names>E. P.</given-names></name></person-group> (<year>2009</year>). <article-title>Is the homunculus aware of sensory adaptation?</article-title> <source>Neural Comput</source>. <volume>21</volume>, <fpage>3271</fpage>&#x02013;<lpage>3304</lpage>. <pub-id pub-id-type="doi">10.1162/neco.2009.09-08-869</pub-id><pub-id pub-id-type="pmid">19686064</pub-id></citation>
</ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Seung</surname> <given-names>H. S.</given-names></name></person-group> (<year>2003</year>). <article-title>Learning in spiking neural networks by reinforcement of stochastic synaptic transmission</article-title>. <source>Neuron</source> <volume>40</volume>, <fpage>1063</fpage>&#x02013;<lpage>1073</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(03)00761-X</pub-id><pub-id pub-id-type="pmid">14687542</pub-id></citation>
</ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Seung</surname> <given-names>H. S.</given-names></name></person-group> (<year>2009</year>). <article-title>Reading the book of memory: sparse sampling versus dense mapping of connectomes</article-title>. <source>Neuron</source> <volume>62</volume>, <fpage>17</fpage>&#x02013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2009.03.020</pub-id><pub-id pub-id-type="pmid">19376064</pub-id></citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shapley</surname> <given-names>R.</given-names></name> <name><surname>Enroth-Cugell</surname> <given-names>C.</given-names></name></person-group> (<year>1984</year>). <article-title>Visual adaptation and retinal gain controls</article-title>. <source>Prog. Retinal Res</source>. <volume>3</volume>, <fpage>263</fpage>&#x02013;<lpage>346</lpage>. <pub-id pub-id-type="doi">10.1016/0278-4327(84)90011-7</pub-id></citation>
</ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smirnakis</surname> <given-names>S. M.</given-names></name> <name><surname>Berry</surname> <given-names>M. J.</given-names></name> <name><surname>Warland</surname> <given-names>D. K.</given-names></name> <name><surname>Bialek</surname> <given-names>W.</given-names></name> <name><surname>Meister</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>Adaptation of retinal processing to image contrast and spatial scale</article-title>. <source>Nature</source> <volume>386</volume>, <fpage>69</fpage>&#x02013;<lpage>73</lpage>. <pub-id pub-id-type="doi">10.1038/386069a0</pub-id><pub-id pub-id-type="pmid">9052781</pub-id></citation>
</ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soltani</surname> <given-names>A.</given-names></name> <name><surname>Wang</surname> <given-names>X.-J.</given-names></name></person-group> (<year>2008</year>). <article-title>From biophysics to cognition: reward-dependent adaptive choice behavior</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>18</volume>, <fpage>209</fpage>&#x02013;<lpage>216</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2008.07.003</pub-id><pub-id pub-id-type="pmid">18678255</pub-id></citation>
</ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Song</surname> <given-names>S.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>2001</year>). <article-title>Cortical development and remapping through spike timing-dependent plasticity</article-title>. <source>Neuron</source> <volume>32</volume>, <fpage>339</fpage>&#x02013;<lpage>350</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(01)00451-2</pub-id><pub-id pub-id-type="pmid">11684002</pub-id></citation>
</ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stein</surname> <given-names>B. E.</given-names></name> <name><surname>Stanford</surname> <given-names>T. R.</given-names></name></person-group> (<year>2008</year>). <article-title>Multisensory integration: current issues from the perspective of the single neuron</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>9</volume>, <fpage>255</fpage>&#x02013;<lpage>266</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2331</pub-id><pub-id pub-id-type="pmid">18354398</pub-id></citation>
</ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Surmeier</surname> <given-names>D. J.</given-names></name> <name><surname>Plotkin</surname> <given-names>J.</given-names></name> <name><surname>Shen</surname> <given-names>W.</given-names></name></person-group> (<year>2009</year>). <article-title>Dopamine and synaptic plasticity in dorsal striatal circuits controlling action selection</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>19</volume>, <fpage>621</fpage>&#x02013;<lpage>628</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2009.10.003</pub-id><pub-id pub-id-type="pmid">19896832</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tetzlaff</surname> <given-names>C.</given-names></name> <name><surname>Kolodziejski</surname> <given-names>C.</given-names></name> <name><surname>Markelic</surname> <given-names>I.</given-names></name> <name><surname>W&#x000F6;rg&#x000F6;tter</surname> <given-names>F.</given-names></name></person-group> (<year>2012</year>). <article-title>Time scales of memory, learning, and plasticity</article-title>. <source>Biol. Cybern</source>. <volume>106</volume>, <fpage>715</fpage>&#x02013;<lpage>726</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-012-0529-z</pub-id><pub-id pub-id-type="pmid">23160712</pub-id></citation>
</ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Toyoizumi</surname> <given-names>T.</given-names></name> <name><surname>Pfister</surname> <given-names>J.-P.</given-names></name> <name><surname>Aihara</surname> <given-names>K.</given-names></name> <name><surname>Gerstner</surname> <given-names>W.</given-names></name></person-group> (<year>2005</year>). <article-title>Generalized bienenstock&#x02013;cooper&#x02013;munro rule for spiking neurons that maximizes information transmission</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>102</volume>, <fpage>5239</fpage>&#x02013;<lpage>5244</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0500495102</pub-id><pub-id pub-id-type="pmid">15795376</pub-id></citation>
</ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsodyks</surname> <given-names>M.</given-names></name></person-group> (<year>1990</year>). <article-title>Associative memory in neural networks with binary synapses</article-title>. <source>Mod. Phys. Lett. B</source> <volume>4</volume>, <fpage>713</fpage>&#x02013;<lpage>716</lpage>. <pub-id pub-id-type="doi">10.1142/S0217984990000891</pub-id></citation>
</ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Turrigiano</surname> <given-names>G. G.</given-names></name> <name><surname>Nelson</surname> <given-names>S. B.</given-names></name></person-group> (<year>2004</year>). <article-title>Homeostatic plasticity in the developing nervous system</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>5</volume>, <fpage>97</fpage>&#x02013;<lpage>107</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1327</pub-id><pub-id pub-id-type="pmid">14735113</pub-id></citation>
</ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vilares</surname> <given-names>I.</given-names></name> <name><surname>Kording</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>Bayesian models: the structure of the world, uncertainty, behavior, and the brain</article-title>. <source>Ann. N.Y. Acad. Sci</source>. <volume>1224</volume>, <fpage>22</fpage>&#x02013;<lpage>39</lpage>. <pub-id pub-id-type="doi">10.1111/j.1749-6632.2011.05965.x</pub-id><pub-id pub-id-type="pmid">21486294</pub-id></citation>
</ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wainwright</surname> <given-names>M. J.</given-names></name></person-group> (<year>1999</year>). <article-title>Visual adaptation as optimal information transmission</article-title>. <source>Vision Res</source>. <volume>39</volume>, <fpage>3960</fpage>&#x02013;<lpage>3974</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(99)00101-7</pub-id><pub-id pub-id-type="pmid">10748928</pub-id></citation>
</ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wark</surname> <given-names>B.</given-names></name> <name><surname>Lundstrom</surname> <given-names>B. N.</given-names></name> <name><surname>Fairhall</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Sensory adaptation</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>17</volume>, <fpage>423</fpage>&#x02013;<lpage>429</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2007.07.001</pub-id><pub-id pub-id-type="pmid">17714934</pub-id></citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Webster</surname> <given-names>M. A.</given-names></name></person-group> (<year>2011</year>). <article-title>Adaptation and visual coding</article-title>. <source>J. Vis</source>. <volume>11</volume>, <fpage>3</fpage>. <pub-id pub-id-type="doi">10.1167/11.5.3</pub-id><pub-id pub-id-type="pmid">21602298</pub-id></citation>
</ref>
<ref id="B87">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wickens</surname> <given-names>J. R.</given-names></name> <name><surname>Reynolds</surname> <given-names>J. N.</given-names></name> <name><surname>Hyland</surname> <given-names>B. I.</given-names></name></person-group> (<year>2003</year>). <article-title>Neural mechanisms of reward-related motor learning</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>13</volume>, <fpage>685</fpage>&#x02013;<lpage>690</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2003.10.013</pub-id><pub-id pub-id-type="pmid">14662369</pub-id></citation>
</ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Williams</surname> <given-names>A. H.</given-names></name> <name><surname>O&#x00027;Leary</surname> <given-names>T.</given-names></name> <name><surname>Marder</surname> <given-names>E.</given-names></name></person-group> (<year>2013</year>). <article-title>Homeostatic regulation of neuronal excitability</article-title>. <source>Scholarpedia</source> <volume>8</volume>, <fpage>1656</fpage>. <pub-id pub-id-type="doi">10.4249/scholarpedia.1656</pub-id></citation>
</ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zylberberg</surname> <given-names>J.</given-names></name> <name><surname>Murphy</surname> <given-names>J. T.</given-names></name> <name><surname>DeWeese</surname> <given-names>M. R.</given-names></name></person-group> (<year>2011</year>). <article-title>A sparse coding model with synaptically local plasticity and spiking neurons can account for the diverse shapes of v1 simple cell receptive fields</article-title>. <source>PLoS Comput. Biol</source>. <volume>7</volume>:<fpage>e1002250</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1002250</pub-id><pub-id pub-id-type="pmid">22046123</pub-id></citation>
</ref>
</ref-list>
<app-group>
<app id="A1">
<title>Appendix</title>
<p>In this section, I derive useful expressions for the following quantities: (1) The stationary distribution of synaptic strengths; (2) The stationary distribution of tuning offsets; (3) The stable states of the spontaneous dynamics. These quantities are instrumental in understanding the results presented in the main text.</p>
<sec>
<title>The dynamics of synaptic strengths</title>
<p>Synaptic plasticity occurs during the presentation of input stimuli, and the statistics of those stimuli affect the synaptic strengths. The goal of this section is to derive a simple expression for the stationary average of synaptic strengths, as a function of the statistics of input stimuli.</p>
<p>The probability that at time <italic>t</italic> a synapse is potentiated (<italic>J<sub>ij</sub></italic>(<italic>t</italic>) &#x0003D; &#x0002B;1) is denoted by <italic>u</italic>(<italic>t</italic>). Conversely, the probability that the synapse is depressed (<italic>J<sub>ij</sub></italic>(<italic>t</italic>) &#x0003D; &#x02212;1) is denoted by <italic>v</italic>(<italic>t</italic>). The normalization condition holds, namely</p>
<disp-formula id="E11"><label>(A1)</label><mml:math id="M11"><mml:mrow><mml:mi>u</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:math></disp-formula>
<p>Using these probabilities, the average synaptic strength is equal to</p>
<disp-formula id="E12"><label>(A2)</label><mml:math id="M12"><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The angular brackets denote an average over the distribution of synaptic strengths. Using Equations (A1), (A2), the probabilities <italic>u</italic>(<italic>t</italic>), <italic>v</italic>(<italic>t</italic>) are written in terms of the average, i.e.,</p>
<disp-formula id="E13"><label>(A3)</label><mml:math id="M13"><mml:mrow><mml:mi>u</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:math></disp-formula>
<disp-formula id="E14"><label>(A4)</label><mml:math id="M14"><mml:mrow><mml:mi>v</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>In order to derive a master equation which describes the evolution of the average synaptic matrix &#x02329;<italic>J<sub>ij</sub></italic>&#x0232A;, transition probabilities must be defined. The probability of potentiation, namely the probability of the transition from <italic>J<sub>ij</sub></italic>(<italic>t</italic>) &#x0003D; &#x02212;1 to <italic>J<sub>ij</sub></italic>(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) &#x0003D; &#x0002B;1 is denoted as <italic>w</italic><sub>&#x0002B;</sub>. Conversely, the probability of depression, namely the probability of the transition from <italic>J<sub>ij</sub></italic>(<italic>t</italic>) &#x0003D; &#x0002B;1 to <italic>J<sub>ij</sub></italic>(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) &#x0003D; &#x02212;1 is denoted as <italic>w</italic><sub>&#x02212;</sub>. The evolution of <italic>u</italic>(<italic>t</italic>) is given by the master equation</p>
<disp-formula id="E15"><label>(A5)</label><mml:math id="M15"><mml:mrow><mml:mi>u</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mo>+</mml:mo></mml:msub><mml:mi>v</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mo>&#x02212;</mml:mo></mml:msub><mml:mi>u</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>In words, the change in the fraction of potentiated synapses during a time step &#x00394;<italic>t</italic> is equal to the fraction of depressed synapses that are potentiated minus the fraction of potentiated synapses that are depressed in that time step. Substituting Equations (A3), (A4) into Equation (A5), it is possible to write the equation in terms of the average synaptic efficacy, namely</p>
<disp-formula id="E16"><label>(A6)</label><mml:math id="M16"><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mo>+</mml:mo></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mo>&#x02212;</mml:mo></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mo>+</mml:mo></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mo>&#x02212;</mml:mo></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
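<p>For a fixed pair of transition probabilities, Equation (A6) can be checked numerically: the average strength relaxes to the fixed point (<italic>w</italic><sub>&#x0002B;</sub> &#x02212; <italic>w</italic><sub>&#x02212;</sub>)/(<italic>w</italic><sub>&#x0002B;</sub> &#x0002B; <italic>w</italic><sub>&#x02212;</sub>). The following is a minimal sketch, not part of the original article; the values of <italic>w</italic><sub>&#x0002B;</sub> and <italic>w</italic><sub>&#x02212;</sub> are arbitrary illustrations.</p>

```python
# Numerical check of the master equation (A5)/(A6) with constant
# transition probabilities w_plus (potentiation) and w_minus (depression).
# These probability values are arbitrary, chosen only for illustration.
w_plus, w_minus = 0.1, 0.3

u = 0.5                      # initial fraction of potentiated synapses
for _ in range(500):
    # Eq. (A5): change in the potentiated fraction per time step,
    # with v(t) = 1 - u(t) from the normalization (A1)
    u = u + w_plus * (1.0 - u) - w_minus * u

J_avg = u - (1.0 - u)        # Eq. (A2): average strength <J> = u - v
J_fixed = (w_plus - w_minus) / (w_plus + w_minus)  # fixed point of Eq. (A6)
print(J_avg, J_fixed)
```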
<p>The transition probabilities depend on neural activity according to a Hebbian prescription, given by Equations (6), (7). Substituting those equations, the above Equation (A6) is rewritten as</p>
<disp-formula id="E17"><label>(A7)</label><mml:math id="M17"><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>This equation shows how the average synaptic strength changes according to the activity of the pre- and post-synaptic neurons. For simplicity, I use &#x00394;<italic>t</italic> &#x0003D; 1 in the following. The solution of Equation (A7) is equal to</p>
<disp-formula id="E18"><label>(A8)</label><mml:math id="M18"><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mtext>&#x0200B;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mfrac><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:msup><mml:mi>t</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mstyle><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo 
stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msup><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:math></disp-formula>
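<p>That Equation (A8) solves the recurrence (A7) can be verified directly by iterating the update and comparing against the closed form. The sketch below is not from the original article; the activity products <italic>x<sub>i</sub></italic>(<italic>t</italic>)<italic>x<sub>j</sub></italic>(<italic>t</italic>) are drawn at random purely as placeholder input.</p>

```python
import numpy as np

# Check that iterating Eq. (A7) (with Delta_t = 1) reproduces the
# closed-form solution Eq. (A8). The activity products x_i(t)*x_j(t)
# are random placeholders; tau, T and J0 are arbitrary illustrations.
rng = np.random.default_rng(0)
tau, T = 20.0, 200
x_prod = rng.choice([-1.0, 1.0], size=T)   # x_i(t') * x_j(t')
J0 = 0.3                                   # arbitrary initial condition

# Recursive update, Eq. (A7): tau * (J_{t+1} - J_t) = -J_t + x_i x_j
J = J0
for t in range(T):
    J = J + (-J + x_prod[t]) / tau

# Closed form, Eq. (A8): exponentially discounted sum of past products
# plus the decaying initial condition
decay = (1.0 - 1.0 / tau) ** (T - np.arange(T) - 1)
J_closed = decay @ x_prod / tau + (1.0 - 1.0 / tau) ** T * J0
print(J, J_closed)
```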
<p>In the following, I assume that time <italic>t</italic> is large enough for the initial condition to decay away, namely <italic>t</italic> &#x0226B; &#x003C4;. Plasticity occurs during stimulus presentation, which imposes a specific pattern of neural activity. During presentation of stimulus &#x003B1;(<italic>t</italic>) at time <italic>t</italic>, neural activity is equal to</p>
<disp-formula id="E19"><label>(A9)</label><mml:math id="M19"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Substituting Equation (A9) into Equation (A8), the expression of the average synaptic strengths is rewritten as</p>
<disp-formula id="E20"><label>(A10)</label><mml:math id="M20"><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mfrac><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:msup><mml:mi>t</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x003C4;</mml:mi></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mstyle><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mo>&#x02032;</mml:mo></mml:msup><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>As explained in the Materials and Methods section, in each trial one stimulus &#x003B1; is drawn at random from a distribution <italic>p</italic>(&#x003B1;). In the limit of large <italic>t</italic> and &#x003C4; (and <italic>t</italic> &#x0226B; &#x003C4;), the sum over <italic>t</italic>&#x02032; in Equation (A10) can be replaced by an integral over the distribution of the stimulus values, and the stationary synaptic strengths are equal to</p>
<disp-formula id="E21"><label>(A11)</label><mml:math id="M21"><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
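<p>The replacement of the time sum (A10) by the integral (A11) can be illustrated by Monte Carlo simulation. In the sketch below, which is not part of the original article, the tuning curve &#x003BE; is assumed to be Gaussian and the stimulus distribution <italic>p</italic>(&#x003B1;) uniform; the tuning width and offsets are arbitrary choices made only for illustration.</p>

```python
import numpy as np

# Illustration of Eq. (A11): the time average of the filtered Hebbian
# product in Eq. (A10) approaches the integral over p(alpha).
# The Gaussian tuning curve and all parameter values are assumptions
# made here for illustration; they are not specified in this appendix.
rng = np.random.default_rng(1)

def xi(mu, alpha, sigma=0.1):            # hypothetical tuning curve
    return np.exp(-(mu - alpha) ** 2 / (2.0 * sigma ** 2))

mu_i, mu_j = 0.4, 0.5
tau, T = 200.0, 400_000
alphas = rng.uniform(0.0, 1.0, size=T)   # stimuli: p(alpha) uniform on (0,1)

# Exponential filter of Eq. (A10), time-averaged after a burn-in
J, samples = 0.0, []
for t, a in enumerate(alphas):
    J += (-J + xi(mu_i, a) * xi(mu_j, a)) / tau
    if t > 10 * tau:
        samples.append(J)
J_sim = np.mean(samples)

# Integral of Eq. (A11) on a fine grid; with p(alpha) = 1 on (0, 1),
# the integral is the mean of the integrand over the grid
grid = np.linspace(0.0, 1.0, 100_001)
J_int = np.mean(xi(mu_i, grid) * xi(mu_j, grid))
print(J_sim, J_int)
```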
</sec>
<sec>
<title>The dynamics of tuning offsets</title>
<p>In addition to plasticity, adaptation also takes place during the presentation of input stimuli, changing the tuning offsets of the neurons. The goal of this section is to derive a simple expression for the distribution of tuning offsets as a function of the statistics of input stimuli. The final result is that, under certain assumptions, the distribution of tuning offsets is equal to the distribution of input stimuli.</p>
<p>The dynamics of tuning offsets is defined by Equation (8). That equation is rewritten here with a minor rearrangement of terms</p>
<disp-formula id="E22"><label>(A12)</label><mml:math id="M22"><mml:mrow><mml:mi>&#x003C4;</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>]</mml:mo><mml:mo>=</mml:mo><mml:msubsup><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi><mml:mn>0</mml:mn></mml:msubsup><mml:mo>&#x02212;</mml:mo><mml:mi>&#x00398;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>I assume that, after enough time has passed, this dynamics settles into a stationary state; in particular, this assumption is met in the limit of large &#x003C4;. The stationary state is defined by the condition that the temporal average of &#x003BC;<sub><italic>i</italic></sub>(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) is equal to the temporal average of &#x003BC;<sub><italic>i</italic></sub>(<italic>t</italic>). In other words, the temporal average of the left hand side is zero, and therefore the temporal average of the right hand side must also be zero. The temporal average of the &#x00398; step function is equal to the fraction of time steps in which its argument is positive, namely the probability that &#x003BC;<sub><italic>i</italic></sub> &#x0003E; &#x003B1;. On the other hand, &#x003BC;<sup>0</sup><sub><italic>i</italic></sub> is constant and given by Equation (9). Therefore, the stationary condition reads</p>
<disp-formula id="E23"><label>(A13)</label><mml:math id="M23"><mml:mrow><mml:mtext>Prob</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0003E;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msubsup><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi><mml:mn>0</mml:mn></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>Note that &#x003BC;<sup>0</sup><sub><italic>i</italic></sub> represents an ordered and uniform tiling of the interval (0,1) by neurons. At the stationary state, when the values of &#x003BC;<sub><italic>i</italic></sub> are approximately constant, the left hand side is equal to the cumulative distribution function of the input stimuli evaluated at &#x003BC;<sub><italic>i</italic></sub>. I denote the density and cumulative distribution function of the input stimuli by, respectively, <italic>p</italic>(&#x003B1;) and <italic>P</italic>(&#x003B1;). Therefore, Equation (A13) is rewritten as</p>
<disp-formula id="E24"><label>(A14)</label><mml:math id="M24"><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msubsup><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi><mml:mn>0</mml:mn></mml:msubsup></mml:mrow></mml:math></disp-formula>
<p>Since a cumulative distribution function is monotonically increasing (strictly so wherever the density <italic>p</italic> is positive), <italic>P</italic> can be inverted to obtain the tuning offsets of the neurons</p>
<disp-formula id="E25"><label>(A15)</label><mml:math id="M25"><mml:mrow><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mi>P</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:msubsup><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi><mml:mn>0</mml:mn></mml:msubsup><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>In the limit of large <italic>N</italic>, I can substitute the index <italic>i</italic> with a label <italic>y</italic> &#x0003D; &#x003BC;<sup>0</sup><sub><italic>i</italic></sub> spanning a continuum of neurons in the interval (0,1). Therefore,</p>
<disp-formula id="E26"><label>(A16)</label><mml:math id="M26"><mml:mrow><mml:mi>&#x003BC;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msup><mml:mi>P</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>y</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
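<p>As a concrete numerical illustration of Equation (A16) (a minimal sketch, not part of the model), consider a hypothetical stimulus density with a closed-form cumulative distribution, p(&#x003B1;) = 2&#x003B1; on (0,1), so that P(&#x003B1;) = &#x003B1;&#x000B2; and P<sup>&#x02212;1</sup>(y) = &#x0221A;y. The tuning offsets then tile (0,1) non-uniformly, more densely where p is larger:</p>

```python
import numpy as np

# Hypothetical stimulus density on (0, 1): p(alpha) = 2*alpha,
# so the CDF is P(alpha) = alpha**2 and its inverse is sqrt(y).
N = 1000

# Uniform tiling of (0, 1) by neurons: mu0_i = (i - 1/2) / N  (Equation A13)
y = (np.arange(1, N + 1) - 0.5) / N

# Tuning offsets at the stationary state: mu(y) = P^{-1}(y)  (Equation A16)
mu = np.sqrt(y)
# The offsets remain ordered but concentrate near 1, where p is largest.
```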
<p>It is then easy to show that the tuning offsets of neurons follow the same distribution as the input stimuli. By taking the derivative of Equation (A16) with respect to <italic>y</italic>, I can calculate the volume <italic>dy</italic> of neurons whose tuning offsets lie in a given interval <italic>d</italic>&#x003BC;, which is equal to</p>
<disp-formula id="E27"><label>(A17)</label><mml:math id="M27"><mml:mrow><mml:mi>d</mml:mi><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>d</mml:mi><mml:mi>&#x003BC;</mml:mi></mml:mrow></mml:math></disp-formula>
<p>Therefore, the density of tuning offsets across neurons is equal to <italic>p</italic>(&#x003BC;), the density of the input stimuli. Denoting the density of tuning offsets by <italic>g</italic>(&#x003BC;) and the density of input stimuli by <italic>p</italic>(&#x003B1;), the above result is written as</p>
<disp-formula id="E28"><label>(A18)</label><mml:math id="M28"><mml:mrow><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
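<p>The convergence of the adaptation dynamics to this stationary distribution can be checked with a small simulation (an illustrative sketch; the Beta(2,2) stimulus density, the value of &#x003C4;, and the number of steps are arbitrary choices, not values from the model). Iterating Equation (A12) drives each &#x003BC;<sub><italic>i</italic></sub> toward P<sup>&#x02212;1</sup>(&#x003BC;<sup>0</sup><sub><italic>i</italic></sub>), so the empirical distribution of offsets approaches the stimulus distribution, Equation (A18):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau, steps = 500, 500.0, 100_000

mu0 = (np.arange(1, N + 1) - 0.5) / N   # uniform targets, Equation (A13)
mu = rng.uniform(0, 1, N)               # arbitrary initial tuning offsets

for _ in range(steps):
    alpha = rng.beta(2, 2)              # assumed stimulus density p(a) = 6a(1-a)
    # Equation (A12): tau * [mu(t+dt) - mu(t)] = mu0 - Theta(mu(t) - alpha(t))
    mu += (mu0 - (mu > alpha)) / tau

# At stationarity, Prob(mu_i > alpha) = mu0_i, i.e. P(mu_i) = mu0_i,
# where P(m) = 3m^2 - 2m^3 is the Beta(2,2) CDF (Equations A13-A15).
cdf_at_mu = 3 * mu**2 - 2 * mu**3
```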
</sec>
<sec>
<title>The spontaneous dynamics of neurons</title>
<p>The goal of this section is to derive a simple equation describing the (stable) fixed points of the retrieval dynamics of neurons, that is, the dynamics in the absence of stimulus and plasticity. However, I assume that stimulation and plasticity have occurred before this retrieval stage, and that synaptic strengths have been modified by the distribution of input stimuli through Equation (A11). Therefore, the distribution of input stimuli in turn affects the neural dynamics during retrieval. Neural dynamics also depends on the distribution of tuning offsets.</p>
<p>During the retrieval phase there is no input stimulus; therefore, the total current <italic>I<sub>i</sub></italic> is equal to the internal current, given by Equation (5). In the limit of a large number of neurons <italic>N</italic>, I can replace the synaptic strength <italic>J<sub>ij</sub></italic> with its average, so that the current is equal to</p>
<disp-formula id="E29"><label>(A19)</label><mml:math id="M29"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:mi>N</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>j</mml:mi></mml:munder><mml:mrow><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mtext>&#x02009;</mml:mtext><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mstyle></mml:mrow></mml:math></disp-formula>
<p>I substitute the expression for the average synaptic strength, Equation (A11), into Equation (A19), and the new expression for the current is</p>
<disp-formula id="E30"><label>(A20)</label><mml:math id="M30"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:mi>N</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mi>&#x003BE;</mml:mi></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>As mentioned above, during the retrieval phase there is no input stimulus. Nevertheless, I assume that neural activity is equal to a pattern that would be obtained by stimulating the network with a given stimulus &#x003BD;. Namely, I assume that</p>
<disp-formula id="E31"><label>(A21)</label><mml:math id="M31"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The value of &#x003BD; depends on time and is unknown. This assumption is verified below by a self-consistent argument. Under this assumption, the new expression of the current is obtained by substituting Equation (A21) into Equation (A20), to find</p>
<disp-formula id="E32"><label>(A22)</label><mml:math id="M32"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:mi>N</mml:mi></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mi>&#x003BE;</mml:mi></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where I dropped the explicit dependence on time. In the limit of a large number of neurons <italic>N</italic>, I can substitute the sum over the index <italic>j</italic> with an integral over the tuning offsets of neurons, &#x003BC; &#x0003D; &#x003BC;<sub><italic>j</italic></sub>, according to the distribution of such tuning offsets, which I denote by <italic>g</italic>(&#x003BC;). Namely, I can use the following expression</p>
<disp-formula id="E33"><label>(A23)</label><mml:math id="M33"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>N</mml:mi></mml:mfrac><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mi>&#x003BE;</mml:mi></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003BC;</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
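<p>The replacement of the sum by an integral in Equation (A23) can be illustrated numerically (a sketch under assumed choices: uniform g, a sign-shaped tuning curve, and arbitrary values of &#x003B1; and &#x003BD;). For &#x003BE;(&#x003BC;, &#x003BD;) = sign(&#x003BC; &#x02212; &#x003BD;) and uniform g, the integral evaluates to 1 &#x02212; 2|&#x003B1; &#x02212; &#x003BD;|, which the finite-N sum approaches as N grows:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
mu = rng.uniform(0, 1, N)       # tuning offsets drawn from an assumed uniform g

def xi(m, s):
    # Sigmoidal tuning curve in the steep limit: xi(mu, nu) = sign(mu - nu)
    return np.sign(m - s)

alpha, nu = 0.3, 0.7            # arbitrary stimulus values in (0, 1)

lhs = np.mean(xi(mu, alpha) * xi(mu, nu))   # left hand side of (A23)
rhs = 1.0 - 2.0 * abs(alpha - nu)           # the integral, done analytically
```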
<p>By substituting Equation (A23) into Equation (A22), the new expression for the current is</p>
<disp-formula id="E34"><label>(A24)</label><mml:math id="M34"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>d</mml:mi><mml:mi>&#x003BC;</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BC;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>This expression is useful for studying the stable states of neural dynamics. However, the assumption made in Equation (A21) must first be checked. The following analysis is valid in the case of a sigmoidal tuning curve, Equation (3), which implies &#x003BE;(&#x003BC;, &#x003BD;) &#x0003D; sign(&#x003BC; &#x02212; &#x003BD;), but similar results are obtained in the case of a periodic tuning curve, Equation (4).</p>
<p>The assumption in Equation (A21) states that neural activity in the absence of the stimulus is equal to the activity in the presence of a stimulus, for some unknown stimulus &#x003BD; at time <italic>t</italic>. In order to check this assumption, I follow a self-consistency argument: given that the assumption is true at time <italic>t</italic>, I test whether it remains true at time <italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>. In other words, given Equation (A21), it must be true that</p>
<disp-formula id="E35"><label>(A25)</label><mml:math id="M35"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Note that &#x003BD;(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) is not given and can take any value in the interval (0,1). Equation (A24) was derived under the assumption of Equation (A21); therefore, I check whether Equation (A24) implies Equation (A25). I substitute the expression for the function &#x003BE;, Equation (10), into Equation (A24). After calculating the integral explicitly, I express the current in terms of the cumulative distribution functions of the input stimuli <italic>P</italic> and of the tuning offsets <italic>G</italic>, whose density functions are, respectively, <italic>p</italic> and <italic>g</italic>. The result is</p>
<disp-formula id="E36"><mml:math id="M36"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>]</mml:mo><mml:mo>+</mml:mo><mml:mtext>sign</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x02212;</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo 
stretchy='false'>]</mml:mo><mml:mo stretchy='false'>[</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>]</mml:mo><mml:mo>+</mml:mo><mml:mo stretchy='false'>[</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>]</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mrow><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>]</mml:mo></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Note that the term in curly brackets is small for &#x003BC;<sub><italic>i</italic></sub> &#x0007E; &#x003BD;. In that case, the current is approximated by</p>
<disp-formula id="E37"><label>(A26)</label><mml:math id="M37"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x02243;</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>]</mml:mo></mml:mrow></mml:math></disp-formula>
<p>This is a monotonically decreasing function of &#x003BC;<sub><italic>i</italic></sub>, because the cumulative distribution <italic>P</italic> is monotonically increasing by definition. In addition, the current <italic>I<sub>i</sub></italic> takes opposite values at the extreme points &#x003BC;<sub><italic>i</italic></sub> &#x0003D; 0 and &#x003BC;<sub><italic>i</italic></sub> &#x0003D; 1. Therefore, it must vanish at a unique value of &#x003BC;<sub><italic>i</italic></sub>, provided that this value is close enough to &#x003BD; to justify the approximation &#x003BC;<italic><sub>i</sub></italic> &#x0007E; &#x003BD;. I define this value as &#x003BD;(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>). As a consequence, using Equation (2), the neural activity is equal to</p>
<disp-formula id="E38"><label>(A27)</label><mml:math id="M38"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext>sign</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mtext>sign</mml:mtext><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x003BE;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>which demonstrates Equation (A25). Therefore, the assumption that neural activity in the absence of the stimulus takes the form of Equation (A21) is self-consistent. The value of &#x003BD;(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) can be calculated as a function of the value of &#x003BD;(<italic>t</italic>). Under the assumption that the two values are close, &#x003BD;(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) &#x0007E; &#x003BD;(<italic>t</italic>), this can be done by replacing &#x003BC;<sub><italic>i</italic></sub> with &#x003BD;(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) in Equation (A26). Therefore,</p>
<disp-formula id="E39"><label>(A28)</label><mml:math id="M39"><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mi>&#x00394;</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>]</mml:mo><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></disp-formula>
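<p>Equation (A28) defines a one-step retrieval map &#x003BD;(<italic>t</italic>) &#x02192; &#x003BD;(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) implicitly through <italic>P</italic>, and it can be iterated numerically. In the following sketch I assume a Beta(2,2) stimulus CDF, P(a) = 3a&#x000B2; &#x02212; 2a&#x000B3;, and uniform tuning offsets, G(&#x003BD;) = &#x003BD;; for this particular pair the constant integral term of Equation (A28) vanishes exactly, and the iteration converges to &#x003BD; = 1/2:</p>

```python
import numpy as np

# Assumed example: Beta(2,2) stimulus CDF and uniform tuning offsets,
# for which the integral term of Equation (A28) is exactly zero.
P = lambda a: 3 * a**2 - 2 * a**3

def P_inv(y):
    # Invert the monotone CDF on (0, 1) by bisection
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if P(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Retrieval map from Equation (A28): P(nu(t+dt)) = G(nu(t)) + const,
# i.e. nu(t+dt) = P^{-1}(nu(t)) with G(nu) = nu and const = 0 here.
nu = 0.2
for _ in range(100):
    nu = P_inv(nu)
```

<p>The iteration contracts toward 1/2 because the slope of the map there, 1/p(1/2) = 2/3, is smaller than one, consistent with the stability condition derived below as Equation (A30).</p>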
<p>In order to find the fixed points of neural activity, I impose that &#x003BD;(<italic>t</italic> &#x0002B; &#x00394;<italic>t</italic>) &#x0003D; &#x003BD;(<italic>t</italic>) &#x0003D; &#x003BD;. The value of &#x003BD; is the fixed point of the retrieval dynamics. This condition is equivalent to</p>
<disp-formula id="E40"><label>(A29)</label><mml:math id="M40"><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mn>0</mml:mn><mml:mn>1</mml:mn></mml:msubsup><mml:mi>d</mml:mi></mml:mrow></mml:mstyle><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>G</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>]</mml:mo><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></disp-formula>
<p>This equation can be solved to find the fixed points &#x003BD; for a given distribution of input stimuli <italic>p</italic> and of tuning offsets <italic>g</italic>. By taking the derivative of Equation (A28) with respect to &#x003BD;, it is easy to check that a fixed point &#x003BD; is stable if and only if</p>
<disp-formula id="E41"><label>(A30)</label><mml:math id="M41"><mml:mrow><mml:mi>p</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0003E;</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BD;</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
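<p>Equations (A29) and (A30) can be checked numerically on a grid for a concrete pair of distributions (an illustrative sketch; the Beta(2,2) stimulus density and the uniform offset density are assumptions, not values from the model). In this example the interior fixed point is &#x003BD; = 1/2, and it is stable since p(1/2) = 3/2 &#x0003E; 1 = g(1/2):</p>

```python
import numpy as np

# Assumed example distributions on (0, 1):
# stimuli: p(a) = 6a(1-a) (Beta(2,2)), with CDF P(a) = 3a^2 - 2a^3
# offsets: g(m) = 1 (uniform),         with CDF G(m) = m
p = lambda a: 6 * a * (1 - a)
P = lambda a: 3 * a**2 - 2 * a**3
g = lambda m: np.ones_like(m)
G = lambda m: m

# Constant term of Equation (A29), by trapezoidal integration
a = np.linspace(0.0, 1.0, 100_001)
f = p(a) * G(a) - P(a) * g(a)
c = 0.5 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a))

# Left hand side of (A29) on an interior grid
# (endpoints excluded: 0 and 1 are trivial roots in this example)
nu = np.linspace(0.01, 0.99, 9_801)
F = P(nu) - G(nu) + c

# Fixed points: sign changes of F; a fixed point is stable iff
# p(nu) > g(nu), Equation (A30)
idx = np.where(np.diff(np.sign(F)) != 0)[0]
roots = 0.5 * (nu[idx] + nu[idx + 1])
stable = [r for r in roots if p(r) > g(r)]
```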
<p>Note that if the two distributions <italic>p</italic> and <italic>g</italic> are exactly equal, as happens at the stationary state of the dynamics of tuning offsets, Equation (A18), then all possible values of &#x003BD; correspond to marginally stable fixed points. This corresponds to a continuous attractor state.</p>
</sec>
</app>
</app-group>
</back>
</article>
