Event Abstract

Fisher information in correlated networks

  • 1 UCL, Gatsby Computational Neuroscience Unit, United Kingdom

The information in a network of neurons depends on its correlational structure, but not in any systematic way: correlations can increase information, decrease it, or have no effect at all [1]. At least those are the theoretical possibilities. But what happens in realistic networks, where the correlational structure is not arbitrary, but is determined by recurrent connectivity and external input? We find that the parameters that maximize information also maximize correlations. Thus, for the model we consider, high correlations are synonymous with high information.

The above results are based on a simple model network consisting of recurrently connected McCulloch-Pitts neurons. The network receives input from a population of neurons that code for an angular variable, denoted theta. The input consists of a noisy hill of activity centered on the true value of theta. The network connectivity has two components: a strong random one and a weak structured one. We make the random component strong to be consistent with the observation that both the excitatory and inhibitory inputs to a neuron are large; we make the structured component weak to prevent runaway excitation [2]. For the structured component we use Mexican hat connectivity, which matches, at least approximately, the form of the input, and thus has the potential to enhance information transmission.
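The model just described can be sketched in a few lines of code. This is only an illustration, not the actual model: the network size, parameter values, threshold, and the specific Mexican hat profile (a cosine with a uniform inhibitory offset) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                  # network size (illustrative)
theta = 0.0              # true value of the angular variable
phi = np.linspace(-np.pi, np.pi, N, endpoint=False)  # preferred angles

# Input: a noisy hill of activity centered on the true value of theta.
# The height h plays the role of the input information scale (illustrative value).
h = 5.0
hill = h * np.exp(np.cos(phi - theta) - 1.0)
u = hill + rng.normal(0.0, 1.0, size=N)

# Connectivity: strong random background (strength W) plus a weak
# structured Mexican hat component (strength J).
W, J = 1.0, 0.2          # illustrative strengths
random_part = W * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
dphi = phi[:, None] - phi[None, :]
mexican_hat = (J / N) * (np.cos(dphi) - 0.5)  # excite near, inhibit far

C = random_part + mexican_hat

# McCulloch-Pitts dynamics: binary threshold units, updated synchronously.
s = (u > 0).astype(float)
for _ in range(20):
    s = (C @ s + u > 0).astype(float)
```

After the updates settle, `s` is a binary population response whose correlational structure is shaped jointly by the random background and the Mexican hat component.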

We have shown previously that we can compute the correlational structure analytically [3], and therefore we can compute the linear Fisher information – the inverse of the variance of an optimal locally unbiased linear estimator of theta [3]. Both the correlations and the linear Fisher information depend on three parameters: W and J, which determine the overall strength of the random background and structured connectivity, respectively, and I_in, the information in the input population, which scales with the height of the noisy hill of activity. For all realistic values of the input information, I_in, we find the following: 1. The linear Fisher information, denoted I_out, increases monotonically with J until the network becomes unstable. 2. When J is small, I_out is a single-peaked function of W. The position of the peak shifts to smaller W as J increases, and for J large enough the peak disappears altogether; in this regime (large J), I_out is a decreasing function of W. These results imply that maximum information transmission occurs when the structured connectivity is strong and the random connectivity weak. How do the correlations behave? They increase with J and decrease with W. Thus, because optimal information transmission occurs at high J and low W, at least for this model, maximum information transmission occurs when the correlations are largest.
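Given the mean response and its covariance, the linear Fisher information is the quadratic form I_lin = f'(theta)^T Sigma^{-1} f'(theta), where f' is the derivative of the mean population response with respect to theta and Sigma is the response covariance. The sketch below computes it for a toy population; the cosine tuning curves, correlation coefficient, and all numbers are illustrative assumptions, not quantities from the model in the abstract.

```python
import numpy as np

def linear_fisher_information(f_prime, Sigma):
    """Linear Fisher information: I_lin = f'(theta)^T Sigma^{-1} f'(theta).

    f_prime : derivative of the mean population response w.r.t. theta
    Sigma   : response covariance matrix (the correlational structure)
    """
    return float(f_prime @ np.linalg.solve(Sigma, f_prime))

# Toy population: cosine tuning curves f_i(theta) = cos(theta - phi_i).
N = 50
phi = np.linspace(-np.pi, np.pi, N, endpoint=False)
theta = 0.3
f_prime = -np.sin(theta - phi)  # d/dtheta of cos(theta - phi_i)

# Compare independent noise with uniformly correlated noise (same variances).
rho = 0.2
Sigma_indep = np.eye(N)
Sigma_corr = (1 - rho) * np.eye(N) + rho * np.ones((N, N))

I_indep = linear_fisher_information(f_prime, Sigma_indep)
I_corr = linear_fisher_information(f_prime, Sigma_corr)
```

In this particular toy setting the shared noise component is orthogonal to f', so the uniform correlations increase I_lin relative to independent noise – one concrete instance of the point, made in [1], that correlations can raise as well as lower the information.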


1. Averbeck, Latham and Pouget. Nature Reviews Neuroscience, 7:358-366, 2006.

2. Roudi and Latham. PLoS Computational Biology, 3:1679-1700, 2007.

3. Barrett and Latham. Frontiers in Systems Neuroscience, doi:10.3389/conf.neuro.06.2009.03.123, 2009.

Conference: Computational and Systems Neuroscience 2010, Salt Lake City, UT, United States, 25 Feb - 2 Mar, 2010.

Presentation Type: Poster Presentation

Topic: Poster session II

Citation: Barrett DG and Latham PE (2010). Fisher information in correlated networks. Front. Neurosci. Conference Abstract: Computational and Systems Neuroscience 2010. doi: 10.3389/conf.fnins.2010.03.00293

Received: 07 Mar 2010; Published Online: 07 Mar 2010.

* Correspondence: David G Barrett, UCL, Gatsby Computational Neuroscience Unit, London, United Kingdom, barrett@gatsby.ucl.ac.uk
