GENERAL COMMENTARY article

Front. Comput. Neurosci., 10 March 2015

Volume 9 - 2015 | https://doi.org/10.3389/fncom.2015.00023

Commentary on Structured chaos shapes spike-response noise entropy in balanced neural networks, by Lajoie, Thivierge, and Shea-Brown

    Peter J. Thomas *

  • Department of Mathematics, Applied Mathematics and Statistics, Case Western Reserve University, Cleveland, OH, USA


Animals with nervous systems generate complex adaptive behaviors in part through the computational capabilities arising from large networks of interconnected neurons in their brains (Churchland and Sejnowski, 1992). Although a full description of the nervous system would take into account the interactions of central circuits with sensory and motor systems (Chiel and Beer, 1997), it is more common to consider central circuitry in isolation. The individual nerve cells and synaptic junctions that comprise biological neural networks are spatially extended structures with fundamentally stochastic dynamics on a range of spatial and temporal scales (Andersen et al., 2006; Carnevale and Hines, 2006). Nevertheless, much progress has been made in understanding the repertoire of neural behavior through simplified deterministic one-dimensional “phase” models such as the Ermentrout-Kopell canonical model (Ermentrout, 1996; Brown et al., 2004; Ermentrout, 2008)^1.

Even if we restrict attention to isolated networks of deterministic, instantaneously coupled phase models, we confront significant challenges. The behavior of such networks can be chaotic, as evidenced by the divergence of nearby trajectories (positive Lyapunov exponents). If we consider such a “chaotic network” driven by a collection of input signals, it is natural to ask how the intrinsic variability related to the chaotic dynamics impacts the network's computational capabilities. It is equally natural to view the system as a communications channel. With the input signals drawn from some specified ensemble, and the output taken as the spike trains of (some or all of) the neurons, the mutual information between the input and output ensembles would be of great interest. However, this quantity is difficult to obtain, either analytically or numerically.
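What a positive Lyapunov exponent measures can be illustrated with a minimal numerical sketch. The example below deliberately uses the logistic map rather than a spiking network (the function name and parameter values are illustrative only); the same average-log-stretching idea underlies numerical estimates of Lyapunov spectra in network models such as those considered here.

```python
import math

def logistic_lyapunov(r=4.0, x0=0.3, n_transient=1000, n_iter=100_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the long-run average of log|f'(x)|, the local
    stretching factor between nearby trajectories."""
    x = x0
    for _ in range(n_transient):        # discard transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
        x = r * x * (1.0 - x)
    return acc / n_iter

# For r = 4 the exact exponent is ln 2 ≈ 0.693 > 0: nearby
# trajectories diverge exponentially, the signature of chaos.
```

A positive value from such an estimate is precisely the evidence of trajectory divergence referred to above; in a network the analogous computation tracks a whole spectrum of exponents along a simulated trajectory.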

In Lajoie et al. (2014), the authors further the analysis of information processing in chaotic deterministic networks by formulating a computationally tractable upper bound on the spike-train noise entropy, building on Monteforte and Wolf (2010) and Lajoie et al. (2013). They study a network of deterministic canonical Ermentrout-Kopell “theta” neurons (Ermentrout and Kopell, 1986) with an ad hoc interaction function. The network connectivity is fixed, sparse, and random. Each neuron is driven by a quenched white-noise injected current input of the form I_i(t) = η + ϵ dW_{i,t}/dt. As the authors (and others) have shown previously, the spontaneous activity (i.e., with ϵ = 0) in this class of networks exhibits chaotic behavior. It has been observed that applying an input to such networks (i.e., setting ϵ > 0) can reduce the apparent irregularity of the spike train ensemble. The spike train entropy quantifies this reduction in variability; the authors obtain an upper bound on this quantity through a state space partitioning construction that takes advantage of the Kolmogorov-Sinai entropy, which is given in turn by the Lyapunov spectrum, which the authors estimate numerically. They show convincingly that the KS entropy of the spike trains is roughly an order of magnitude smaller than what one would expect from a naive estimate based on the single-cell noise entropy. Their results help make rigorous the observation that the application of a driving stimulus reduces the variability of the resulting spike trains, although the networks remain chaotic.
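A single-neuron sketch (not the authors' full network, and with arbitrary parameter values) can make the setup concrete: a theta neuron dθ/dt = (1 − cos θ) + (1 + cos θ)I(t), driven by a frozen-noise current I(t) = η + ϵ dW/dt, where fixing the random seed plays the role of “quenching” the noise realization.

```python
import math, random

def theta_neuron_spikes(eta=0.5, eps=0.1, dt=1e-3, t_max=100.0, seed=0):
    """Euler simulation of a single theta neuron,
        dtheta/dt = (1 - cos theta) + (1 + cos theta) * I(t),
    driven by a frozen (quenched) white-noise current
        I(t) = eta + eps * dW/dt.
    A spike is recorded each time theta crosses pi."""
    rng = random.Random(seed)      # fixed seed = frozen noise path
    theta, t, spikes = -math.pi, 0.0, []
    sqrt_dt = math.sqrt(dt)
    for _ in range(int(t_max / dt)):
        # Euler-Maruyama step: eps*dW contributes eps*sqrt(dt)*N(0,1)
        I = eta + eps * rng.gauss(0.0, 1.0) / sqrt_dt
        theta_new = theta + dt * ((1 - math.cos(theta))
                                  + (1 + math.cos(theta)) * I)
        if theta < math.pi <= theta_new:    # threshold crossing = spike
            spikes.append(t)
        if theta_new >= math.pi:
            theta_new -= 2 * math.pi        # wrap phase to (-pi, pi]
        theta = theta_new
        t += dt
    return spikes
```

Re-running with the same seed reproduces the spike train exactly, which is what makes entropy conditioned on a single frozen input realization well-defined; a different seed (a different draw from the input ensemble) yields a different train.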

While this result is a substantive contribution, it is still some steps removed from telling us the mutual information I(X;Y) = H(Y) − H(Y|X) between an ensemble of inputs, X, and the corresponding ensemble of outputs, Y. The authors' result gives a bound on H(Y|x) for a specific realization of the frozen noise inputs x ∈ X. Because the system is ergodic, this estimate applies as well to the mean entropy H(Y|X) [as discussed in Lajoie et al. (2013)]. However, as the authors point out, one cannot replace the entropy H(Y) with H(Y|0), the entropy when the input fluctuations are switched off, since (as they convincingly demonstrate) turning on the input (ϵ > 0) significantly changes the entropy. The entropy that would be needed for calculating the mutual information would be the spike train entropy for the ensemble unconditioned on a specific input—but with an ensemble of different white noises all with fixed ϵ > 0. It would be very interesting if one could investigate how I(X;Y) varied as a function of ϵ; for instance, whether the mutual information changes smoothly or whether there is evidence for some kind of information-processing phase transition. The authors' contribution provides a valuable step along the way to a deeper understanding of the impact of chaotic dynamics on computations in deterministic neural networks.
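The decomposition I(X;Y) = H(Y) − H(Y|X) can be made concrete on a toy discrete channel. The sketch below (a binary symmetric channel, unrelated to the authors' network model) shows why both terms are needed: the conditional entropy alone, which is what the spike-train bound addresses, does not determine the mutual information without the unconditioned output entropy H(Y).

```python
import math

def entropy(p):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) = H(Y) - H(Y|X) for finite alphabets.
    p_y_given_x[x] is the conditional distribution of Y given X = x."""
    p_y = {}                                   # marginal of Y
    for x, px in p_x.items():
        for y, pyx in p_y_given_x[x].items():
            p_y[y] = p_y.get(y, 0.0) + px * pyx
    h_y = entropy(p_y)
    h_y_given_x = sum(px * entropy(p_y_given_x[x]) for x, px in p_x.items())
    return h_y - h_y_given_x

# Binary symmetric channel, flip probability 0.1, uniform input:
p_x = {0: 0.5, 1: 0.5}
chan = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
# I(X;Y) = 1 - H_b(0.1) ≈ 0.531 bits
```

In the network setting X would be the ensemble of frozen-noise inputs and Y the resulting spike trains; the hard missing ingredient, as noted above, is H(Y) for the input ensemble with fixed ϵ > 0.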

Statements

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. As pointed out in Wolf et al. (2014), the choice of 1D model can strongly influence the entropy production properties of the resulting network.

References

1. Andersen, P., Morris, R., Amaral, D., Bliss, T., and O'Keefe, J. (2006). The Hippocampus Book. Oxford: Oxford University Press.

2. Brown, E., Moehlis, J., and Holmes, P. (2004). On the phase reduction and response dynamics of neural oscillator populations. Neural Comput. 16, 673–715. doi: 10.1162/089976604322860668

3. Carnevale, N. T., and Hines, M. L. (2006). The NEURON Book. New York, NY: Cambridge University Press.

4. Chiel, H. J., and Beer, R. D. (1997). The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment. Trends Neurosci. 20, 553–557. doi: 10.1016/S0166-2236(97)01149-1

5. Churchland, P., and Sejnowski, T. J. (1992). The Computational Brain. Cambridge, MA: MIT Press.

6. Ermentrout, B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Comput. 8, 979–1001. doi: 10.1162/neco.1996.8.5.979

7. Ermentrout, B. (2008). Ermentrout-Kopell canonical model. Scholarpedia 3:1398. doi: 10.4249/scholarpedia.1398

8. Ermentrout, G. B., and Kopell, N. (1986). Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 46, 233–253. doi: 10.1137/0146017

9. Lajoie, G., Lin, K. K., and Shea-Brown, E. (2013). Chaos and reliability in balanced spiking networks with temporal drive. Phys. Rev. E 87:052901. doi: 10.1103/PhysRevE.87.052901

10. Lajoie, G., Thivierge, J.-P., and Shea-Brown, E. (2014). Structured chaos shapes spike-response noise entropy in balanced neural networks. Front. Comput. Neurosci. 8:123. doi: 10.3389/fncom.2014.00123

11. Monteforte, M., and Wolf, F. (2010). Dynamical entropy production in spiking neuron networks in the balanced state. Phys. Rev. Lett. 105:268104. doi: 10.1103/PhysRevLett.105.268104

12. Wolf, F., Engelken, R., Puelma-Touzel, M., Weidinger, J. D. F., and Neef, A. (2014). Dynamical models of cortical circuits. Curr. Opin. Neurobiol. 25, 228–236. doi: 10.1016/j.conb.2014.01.017

Keywords

spike-response noise entropy, balanced neural networks, information theory, chaos, mutual information

Citation

Thomas PJ (2015) Commentary on Structured chaos shapes spike-response noise entropy in balanced neural networks, by Lajoie, Thivierge, and Shea-Brown. Front. Comput. Neurosci. 9:23. doi: 10.3389/fncom.2015.00023

Received

02 October 2014

Accepted

08 February 2015

Published

10 March 2015

Volume

9 - 2015

Edited by

David Hansel, University of Paris, France

Reviewed by

Demian Battaglia, Max Planck Institute for Dynamics and Self-Organization, Germany; Guillaume Lajoie, Max Planck Institute for Dynamics and Self-Organization, Germany

This article was submitted to the journal Frontiers in Computational Neuroscience.

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
