Large networks of sparsely coupled excitatory and inhibitory cells occur throughout the brain. In many models of these networks, a striking feature is that their dynamics are chaotic and thus sensitive to small perturbations. How does this chaos manifest in the neural code? Specifically, how variable are the spike patterns that such a network produces in response to an input signal? To answer this, we derive a bound for a general measure of variability, the spike-train entropy. This yields insights into the variability of multi-cell spike-pattern distributions in large recurrent networks of spiking neurons responding to fluctuating inputs. The analysis is based on results from random dynamical systems theory and is complemented by detailed numerical simulations. We find that the spike-pattern entropy is an order of magnitude lower than what would be extrapolated from single cells. This holds even though network coupling becomes vanishingly sparse as network size grows, a phenomenon that depends on "extensive chaos," as previously discovered for balanced networks without stimulus drive. Moreover, we show how spike-pattern entropy is controlled by temporal features of the inputs. Our findings provide insight into how neural networks may encode stimuli in the presence of inherently chaotic dynamics.
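As a rough illustration of the quantity at stake (not the bound derived in the study), the following Python sketch computes a naive plug-in estimate of multi-cell spike-pattern entropy from binned spike trains; the function name, binning into non-overlapping words, and word length are our own illustrative choices.

```python
import numpy as np

def spike_word_entropy(spikes, word_len=10):
    """Plug-in (naive) estimate of spike-pattern entropy in bits.

    spikes   : (n_cells, n_bins) binary array of binned spike trains.
    word_len : number of consecutive time bins per pattern ("word").
    Patterns are the joint binary words across all cells; the estimate
    is the Shannon entropy of their empirical distribution.
    """
    n_cells, n_bins = spikes.shape
    counts = {}
    for t in range(0, n_bins - word_len + 1, word_len):  # non-overlapping words
        w = spikes[:, t:t + word_len].tobytes()
        counts[w] = counts.get(w, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())
```

Note that this plug-in estimator is biased for limited data, which is one reason an analytical bound of the kind described above is valuable.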
A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies, as well as in studies of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poisson spike trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study the self-consistent statistics of input and output spectra of neural spike trains. Instead of simulating a large network, we use an iterative scheme in which we simulate a single neuron over several generations. In each generation, the neuron is stimulated with surrogate stochastic input whose statistics resemble the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike-interval density as observed in the previous generation, and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random, sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.
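To make the Gaussian variant of the iteration concrete, here is a minimal Python sketch under simplifying assumptions: a current-based LIF neuron, a random-phase surrogate standing in for Gaussian noise with a prescribed spectral shape, and illustrative parameter values. It is not the implementation used in the study.

```python
import numpy as np

def lif_spikes(drive, dt=1e-4, tau=0.02, v_th=1.0, v_r=0.0):
    """Current-based leaky integrate-and-fire neuron; returns a binary spike train."""
    v, out = 0.0, np.zeros_like(drive)
    for i, inp in enumerate(drive):
        v += dt / tau * (-v + inp)
        if v >= v_th:
            v, out[i] = v_r, 1.0
    return out

def random_phase_surrogate(spectrum, n, rng):
    """Gaussian-like noise whose power spectrum is proportional to `spectrum`
    (the overall scale is fixed by the caller, so normalization is irrelevant)."""
    phases = np.exp(2j * np.pi * rng.random(len(spectrum)))
    return np.fft.irfft(np.sqrt(spectrum) * phases, n=n)

def self_consistent_spectrum(mu=1.2, sigma=0.3, n=2**16, dt=1e-4, n_gen=10):
    """Gaussian iteration: the output spectrum of generation g shapes the
    surrogate input noise of generation g + 1, until a fixed point is reached."""
    rng = np.random.default_rng(0)
    spec = np.ones(n // 2 + 1)                       # generation 0: white noise
    for _ in range(n_gen):
        noise = random_phase_surrogate(spec, n, rng)
        noise *= sigma / noise.std()                 # keep input intensity fixed
        s = lif_spikes(mu + noise, dt=dt)
        spec = np.abs(np.fft.rfft(s - s.mean()))**2  # spectral shape (unnormalized)
    return spec
```

Only the shape of the spectrum is iterated here, since the input noise intensity is rescaled to a fixed value each generation; the renewal variant (i) would instead resample spike trains from the measured interspike-interval density.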
Recent experimental and theoretical studies have highlighted the importance of cell-to-cell differences in the dynamics and functions of neural networks, such as in different types of neural coding or synchronization. It is still not known, however, how neural heterogeneity affects cortical computations or the dynamics of typical cortical circuits composed of sparse excitatory and inhibitory networks. In this work, we analytically and numerically study the dynamics of a typical cortical circuit with a given level of neural heterogeneity. Our circuit includes realistic features found in real cortical populations, such as network sparseness, excitatory and inhibitory subpopulations of neurons, and different cell-to-cell heterogeneities for each population in the system. We find highly differentiated roles for heterogeneity, depending on the subpopulation in which it is found. In particular, while heterogeneity among excitatory neurons nonlinearly increases the mean firing rate and linearizes the f-I curves, heterogeneity among inhibitory neurons may decrease the network activity level and induce divisive gain effects in the f-I curves of the excitatory cells, providing an effective gain-control mechanism to influence information flow. In addition, we compute the conditions for stability of the network activity, finding that the synchronization onset is robust to inhibitory heterogeneity but shifts to lower input levels for higher excitatory heterogeneity. Finally, we extend recently reported heterogeneity-induced mechanisms for signal detection under rate coding, and we explore the validity of our findings when multiple sources of heterogeneity are present. These results allow for a detailed characterization of the role of neural heterogeneity in asynchronous cortical networks.
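The linearization effect attributed to excitatory heterogeneity can be illustrated with a toy sketch that is much simpler than the circuit model above: averaging the deterministic LIF f-I curve over a population with Gaussian-distributed thresholds (our own illustrative assumption) smears the hard rheobase across cells and smooths the population-level curve.

```python
import numpy as np

def fI_lif(I, v_th=1.0, tau=0.02):
    """Deterministic LIF rate (Hz) vs. constant input I; zero below rheobase."""
    I = np.asarray(I, dtype=float)
    r = np.zeros_like(I)
    supra = I > v_th
    r[supra] = 1.0 / (tau * np.log(I[supra] / (I[supra] - v_th)))
    return r

def population_fI(I, sigma_het, n_cells=5000, seed=0):
    """Population-averaged f-I curve with Gaussian-distributed thresholds.

    Threshold heterogeneity of spread `sigma_het` makes different cells
    recruit at different inputs, which smooths and effectively linearizes
    the population f-I curve relative to the single-cell one.
    """
    rng = np.random.default_rng(seed)
    thetas = np.maximum(1.0 + sigma_het * rng.standard_normal(n_cells), 0.05)
    return np.mean([fI_lif(I, v_th=th) for th in thetas], axis=0)
```

Comparing `population_fI(I, 0.0)` with `population_fI(I, 0.2)` over a range of inputs shows the sharpening-to-linearization trend; the divisive gain effects of inhibitory heterogeneity reported above require the full recurrent circuit and are not captured by this single-population sketch.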
Electrical signaling in neurons is mediated by the opening and closing of large numbers of individual ion channels. The ion channels' state transitions are stochastic and introduce fluctuations in the macroscopic current through ion-channel populations. This creates an unavoidable source of intrinsic electrical noise for the neuron, leading to fluctuations in the membrane potential and to spontaneous spikes. While this effect is well known, the impact of channel noise on single-neuron dynamics remains poorly understood, and most results are based on numerical simulations. There is no agreement, even among theoretical studies, on which ion-channel type is the dominant noise source, nor on how the inclusion of additional ion-channel types affects voltage noise. Here we describe a framework to calculate voltage noise directly from an arbitrary set of ion-channel models, and we discuss how it can be used to estimate spontaneous spike rates.
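As a hedged sketch of what such a calculation can look like, assume each channel population contributes an effectively Lorentzian current-noise spectrum (corner frequency set by its gating time constant) that a passive membrane filters into voltage noise. This is a simplification of a full quasi-active treatment, and all parameter values below are illustrative.

```python
import numpy as np

def voltage_noise_spectrum(f, channels, g_tot=10e-9, C=100e-12):
    """Voltage-noise PSD (V^2/Hz) for a passive membrane driven by channel noise.

    Each channel population contributes a Lorentzian current-noise spectrum
    with low-frequency level `S0` (A^2/Hz) and gating time constant `tau` (s);
    the membrane impedance filters the summed current noise into voltage noise:
        S_V(f) = S_I(f) / |g_tot + i*2*pi*f*C|^2
    """
    w = 2.0 * np.pi * np.asarray(f)
    S_I = sum(ch["S0"] / (1.0 + (w * ch["tau"])**2) for ch in channels)
    return S_I / np.abs(g_tot + 1j * w * C)**2

# Example with two illustrative channel populations; sigma_V is the RMS voltage noise.
f = np.linspace(0.1, 5e3, 20000)
S_V = voltage_noise_spectrum(f, [{"S0": 1e-26, "tau": 5e-3},
                                 {"S0": 4e-27, "tau": 1e-3}])
sigma_V = np.sqrt(np.trapz(S_V, f))
```

Comparing each population's contribution to `sigma_V` is one way to ask which channel type dominates the noise, and the resulting voltage fluctuation amplitude relative to the spike threshold feeds into estimates of spontaneous spike rates.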
Cortical neurons receive barrages of excitatory and inhibitory inputs that are not independent, as network structure and synaptic kinetics impose statistical correlations. Experiments in vitro and in vivo have demonstrated correlations between inhibitory and excitatory synaptic inputs to cortical neurons in which inhibition lags behind excitation. This delay arises in feed-forward inhibition (FFI) circuits and ensures that coincident excitation and inhibition do not preclude neuronal firing. Conversely, inhibition that is too delayed broadens neuronal integration times, thereby diminishing spike-time precision and increasing the firing frequency. This led us to hypothesize that the correlation between excitatory and inhibitory synaptic inputs modulates the encoding of information in neural spike trains. We tested this hypothesis by investigating the effect of such correlations on the information rate (IR) of spike trains, using a Hodgkin-Huxley model in which both synaptic and membrane conductances are stochastic. We investigated two different synaptic input regimes: balanced synaptic conductances and balanced currents. Our results show that correlations arising from the synaptic kinetics, τ, and the millisecond-scale lag, δ, of inhibition relative to excitation strongly affect the IR of spike trains. In the regime of balanced synaptic currents, for short time lags (δ ~ 1 ms) there is an optimal τ that maximizes the IR of the postsynaptic spike train. Given the short time scales of monosynaptic inhibitory lags and synaptic decay kinetics reported in cortical neurons under physiological conditions, we propose that FFI in cortical circuits is poised to maximize the rate of information transfer between cortical neurons. Our results also provide a possible explanation for how certain drugs and genetic mutations that affect synaptic kinetics can degrade information processing in the brain.
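A minimal sketch of the input correlation structure described above, assuming a shared Poisson drive, exponential synaptic kernels with decay constants playing the role of τ, and an inhibitory lag δ; this is our simplified stand-in for the stochastic conductances of the full Hodgkin-Huxley model, with illustrative parameters throughout.

```python
import numpy as np

def ffi_conductances(rate=800.0, tau_e=2e-3, tau_i=5e-3, delta=1e-3,
                     T=2.0, dt=1e-4, seed=0):
    """Feed-forward-inhibition-style input: a shared Poisson drive excites
    directly and inhibits after a lag `delta`, each spike train filtered by
    an exponential synaptic kernel with its own decay time constant."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    drive = rng.poisson(rate * dt, n).astype(float)   # shared presynaptic spikes
    lag = int(round(delta / dt))
    drive_i = np.roll(drive, lag)
    drive_i[:lag] = 0.0                               # inhibition lags excitation
    g_e, g_i = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        g_e[t] = g_e[t - 1] * (1.0 - dt / tau_e) + drive[t]
        g_i[t] = g_i[t - 1] * (1.0 - dt / tau_i) + drive_i[t]
    return g_e, g_i
```

Cross-correlating `g_e` and `g_i` shows a peak near lag δ whose width is set by the synaptic kernels; sweeping δ and τ while feeding such conductances into a stochastic neuron model and measuring the information rate of the output spike train is the kind of numerical experiment the study describes.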