HYPOTHESIS AND THEORY article

Front. Comput. Neurosci., 15 November 2013
Volume 7 - 2013 | https://doi.org/10.3389/fncom.2013.00162

Generalized activity equations for spiking neural network dynamics

Michael A. Buice1 and Carson C. Chow2*

  • 1Modeling, Analysis and Theory Team, Allen Institute for Brain Science, Seattle, WA, USA
  • 2Laboratory of Biological Modeling, NIDDK, National Institutes of Health, Bethesda, MD, USA

Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate than rate neurons because of the inherent disparity in time scales: the spike duration is much shorter than the inter-spike interval, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.

Introduction

Even with the rapid increase in computing power due to Moore's law, and notwithstanding proposals to simulate the entire human brain Ailamaki et al. (2012), a realistic simulation of a functioning human brain performing non-trivial tasks remains remote. While it is plausible that a network the size of the human brain could be simulated in real time Izhikevich and Edelman (2008); Eliasmith et al. (2012), there are no systematic ways to explore the parameter space. Technology to experimentally determine all the parameters in a single brain simultaneously does not exist, and any attempt to infer parameters by fitting to data would require exponentially more computing power than a single simulation. We also have no idea how much detail is required. Is it sufficient to simulate a large number of single compartment neurons or do we need multiple compartments? How much molecular detail is required? Do we even know all the important biochemical and biophysical mechanisms? There are an exponential number of ways a simulation could fail, and determining which ones apply remains computationally intractable. Hence, an alternative means to provide appropriate prior distributions for parameter values and model detail is desirable. Current theoretical explorations of the brain utilize either abstract mean field models or small numbers of more biophysical spiking models. The regime of large but finite numbers of spiking neurons remains largely unexplored. It is not fully known what role spike time correlations play in the brain. It would thus be very useful if mean field models could be augmented with some spike correlation information.

This paper outlines a scheme to derive generalized activity equations for the mean and correlation dynamics of a fully deterministic system of coupled spiking neurons. It synthesizes methods we have developed to solve two different types of problems. The first problem was how to compute finite system size effects in a network of coupled oscillators. We adapted the methods of the kinetic theory of gases and plasmas Ichimaru (1973); Nicholson (1993) to solve this problem. The method exploits the exchange symmetry of the oscillators and characterizes the phases of all the oscillators in terms of a phase density function η(θ, t), where each oscillator is represented as a point mass in this density. We then write down a formal flux conservation equation of this density, called the Klimontovich equation, which completely characterizes the system. However, because the density is not differentiable, the Klimontovich equation only exists in the weak or distributional sense. Previously, e.g., Desai and Zwanzig (1978); Strogatz and Mirollo (1991); Abbott and van Vreeswijk (1993); Treves (1993), the equations were made usable by taking the "mean field limit" of N → ∞ and assuming that the density is differentiable in that limit, resulting in what is called the Vlasov equation. Instead of immediately taking the mean field limit, we regularize the density by averaging over initial conditions and parameters and then expand in the inverse system size 1/N around the mean field limit. This results in a system of coupled moment equations known as the BBGKY moment hierarchy. In Hildebrand et al. (2007), we solved the moment equations for the Kuramoto model perturbatively to compute the pair correlation function between oscillators. However, the procedure was somewhat ad hoc and complicated. We then subsequently showed in Buice and Chow (2007) that the BBGKY moment hierarchy could be recast in terms of a density functional of the phase density. This density functional could be written down explicitly as an integral over all possible phase histories, i.e., a Feynman-Kac path integral. The advantage of using this density functional formalism is that the moments to arbitrary order in 1/N could be computed as a steepest-descent expansion of the path integral, which can be expressed in terms of Feynman diagrams. This made the calculation more systematic and mechanical. We later applied the same formalism to synaptically coupled spiking models Buice and Chow (2013b).

Concurrently with this line of research, we also explored the question of how to generalize population activity equations, such as the Wilson-Cowan equations, to include the effects of correlations. The motivation for this question is that the Wilson-Cowan equations are mean field equations and do not capture the effects of spike-time correlations. For example, the gain in the Wilson-Cowan equations is fixed (a valid approximation when the neurons fire asynchronously), but correlations in the firing times can change the gain Salinas and Sejnowski (2000). Thus, it would be useful to develop a systematic procedure to augment population activity equations to include spike correlation effects. The approach we took was to posit plausible microscopic stochastic dynamics, dubbed the spike model, that reduced to the Wilson-Cowan equations in the mean field limit and compute the self-consistent moment equations from that microscopic theory. Buice and Cowan (2009) showed that the solution of the master equation of the spike model could be expressed formally in terms of a path integral over all possible spiking histories. The random variable in the path integral is a spike count, whereas in the path integral for the deterministic phase model we described above, the random variable is a phase density. To generate a system of moment equations for the microscopic stochastic system, we transformed the random spike count variable in the path integral into moment variables Buice et al. (2010). This is accomplished using the effective action approach of field theory, where the exponent of the cumulant generating functional, called the action, which is a function of the random variable, is Legendre transformed into an effective action of the cumulants. The desired generalized Wilson-Cowan activity equations are then the equations of motion of the effective action. This is analogous to the transformation from Lagrangian variables of position and velocity to Hamiltonian variables of position and momentum. Here, we show how to apply the effective action approach to a deterministic system of synaptically coupled spiking neurons to derive a set of moment equations.

Approach

Consider a network of single compartment conductance-based neurons

\[ C\frac{dV_i}{dt} = -\sum_{r=1}^{n} g_r(x_i^r)\,(V_i - v_r) + \sum_{j=1}^{N} g_{ij}\, s_j(t) \]
\[ \tau_i^r \frac{dx_i^r}{dt} = f(V_i, x_i) \]
\[ \tau_j \frac{ds_j}{dt} = h(V_j, s_j) \]
\[ \tau_g \frac{dg_{ij}}{dt} = \phi(g_{ij}, V) \]

The equations are remarkably stiff with time scales spanning orders of magnitude from milliseconds for ion channels, to seconds for adaptation, and from hours to years for changes in synaptic weights and connections. Parameter values must be assigned for on the order of 10^11 neurons with 10^4 connections each. Here, we present a formalism to derive a set of reduced activity equations directly from a network of deterministic spiking neurons that capture the spike rate and spike correlation dynamics. The formalism first constructs a density functional for the firing dynamics of all the neurons in a network. It then systematically marginalizes the unwanted degrees of freedom to isolate a set of self-consistent equations for the desired quantities. For heuristic reasons, we derive an example set of generalized activity equations for the first and second cumulants of the firing dynamics of a simple spiking model, but the method can be applied to any spiking model.

A convenient form to express spiking dynamics is with a phase oscillator. Consider the quadratic integrate-and-fire neuron

\[ \frac{dV_i}{dt} = I_i + V_i^2 + \alpha_i u(t) \tag{1} \]

where I_i is an external current and u(t) is the synaptic current with weight α_i. The spike is said to occur when V goes to infinity, whereupon it is reset to minus infinity. The quadratic non-linearity ensures that this transit occurs in a finite amount of time. The substitution V = tan(θ/2) yields the theta model Ermentrout and Kopell (1986):

\[ \frac{d\theta_i}{dt} = 1 - \cos\theta_i + (1 + \cos\theta_i)\,(I_i + \alpha_i u) \tag{2} \]

which is the normal form of a Type I neuron near the bifurcation to firing Ermentrout (1996). The phase neuron is an adequate approximation to spiking dynamics provided the inputs are not so strong as to disturb the limit cycle. The phase neuron also includes realistic dynamics such as not firing when the input is below threshold. Coupled phase models arise naturally in weakly coupled neural networks Ermentrout and Kopell (1991); Hoppensteadt and Izhikevich (1997); Golomb and Hansel (2000). They include the Kuramoto model Kuramoto (1984), which we have previously analyzed Buice and Chow (2007); Hildebrand et al. (2007).
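
As a concrete illustration (our own, not from the paper), the following minimal sketch integrates Equation (2) for a single neuron with constant suprathreshold drive; the parameter values I, dt, and T are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed parameters): Euler integration of the theta model,
# Eq. (2), for one neuron with constant drive and no synaptic input.
I, alpha, u = 0.1, 0.0, 0.0
dt, T = 1e-3, 50.0
theta = -np.pi                       # start at the reset phase
spike_times = []

for step in range(int(T / dt)):
    dtheta = (1 - np.cos(theta)) + (1 + np.cos(theta)) * (I + alpha * u)
    theta_new = theta + dt * dtheta
    if theta_new >= np.pi:           # a spike occurs when the phase crosses pi
        spike_times.append(step * dt)
        theta_new -= 2 * np.pi       # wrap back to the reset side
    theta = theta_new

# For I > 0 the theta neuron fires periodically with period pi/sqrt(I);
# for I < 0 (below threshold) it never fires.
print(len(spike_times), "spikes; expected period:", np.pi / np.sqrt(I))
```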

Here, we consider the phase dynamics of a set of N coupled phase neurons obeying

\[ \dot\theta_i = F(\theta_i, \gamma_i, u(t)) \tag{3} \]
\[ \dot u(t) = -\beta u(t) + \beta \nu(t) \tag{4} \]
\[ \nu(t) = \frac{1}{N}\sum_{j=1}^{N}\sum_{l} \delta(t - t_j^l) \tag{5} \]

where each neuron has a phase θi that is indexed by i, u is a global synaptic drive, F(θ, γ, u) is the phase and synaptic drive dependent frequency, γi represents all the parameters for neuron i drawn from a distribution with density g(γ), ν is the population firing rate of the network, t_j^l is the lth firing time of neuron j, and a neuron fires when its phase crosses π. In the present paper, we consider all-to-all or global coupling through a synaptic drive variable u(t). However, our basic approach is not restricted to global coupling.
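
A direct simulation of (3)-(5) makes the setup concrete. The sketch below uses the theta-model frequency F(θ, γ, u) = 1 − cos θ + (1 + cos θ)(I + γu) as an example; the network size, the parameter values, and the Gaussian choice for g(γ) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of the network (3)-(5) under assumed parameters. Each spike
# contributes an impulse of size beta/N to the synaptic drive u, which is the
# discrete-time counterpart of Eqs. (4)-(5).
N, I, beta = 200, 0.1, 1.0
gamma = rng.normal(0.5, 0.1, size=N)        # per-neuron parameter gamma_i ~ g
theta = rng.uniform(-np.pi, np.pi, size=N)  # asynchronous initial phases
u, dt, T = 0.0, 1e-3, 20.0

for _ in range(int(T / dt)):
    F = (1 - np.cos(theta)) + (1 + np.cos(theta)) * (I + gamma * u)
    theta = theta + dt * F
    crossed = theta >= np.pi                # neurons firing in this step
    nu = crossed.sum() / (N * dt)           # population rate, Eq. (5)
    u += dt * beta * (nu - u)               # synaptic drive, Eq. (4)
    theta[crossed] -= 2 * np.pi             # wrap fired neurons past pi

print(f"synaptic drive at T: u = {u:.4f}")
```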

We can encapsulate the phase information of all the neurons into a neuron density function Buice and Chow (2007, 2011, 2013a,b); Hildebrand et al. (2007).

\[ \eta(\theta,\gamma,t) = \frac{1}{N}\sum_{i=1}^{N} \delta(\theta - \theta_i(t))\,\delta(\gamma - \gamma_i) \tag{6} \]

where δ(·) is the Dirac delta functional, and θi(t) is a solution of system (3–5). The neuron density gives a count of the number of neurons with phase θ and synaptic strength γ at time t. Using the fact that the sum of Dirac deltas in (5) can be expressed as $\sum_l \delta(t - t_j^l) = \dot\theta_j\,\delta(\pi - \theta_j)$, the population firing rate can be rewritten as

\[ \nu(t) = \int d\gamma\; F(\pi,\gamma,u(t))\,\eta(\pi,\gamma,t) \tag{7} \]

The neuron density formally obeys the conservation equation

\[ \partial_t \eta(\theta,\gamma,t) + \partial_\theta\left[F\,\eta(\theta,\gamma,t)\right] = 0 \tag{8} \]

with initial condition η(θ, γ, t0) = η0(θ, γ) and u(t0) = u0. Equation (8) is known as the Klimontovich equation Ichimaru (1973); Liboff (2003). The Klimontovich equation, the equation for the synaptic drive (4), and the firing rate expressed in terms of the neuron density (7) fully define the system. The system is still fully deterministic but is now in a form from which various reduced descriptions can be derived. Here, we will produce an example of a set of reduced equations, or generalized activity equations, that capture some aspects of the spiking dynamics. The path we take toward that end requires the introduction of some formal machinery that may obscure the intuition behind the approximations. However, we feel that it is useful because it provides a systematic and controlled way of generating averaged quantities that can easily be generalized.

For finite N, (8) is only valid in the weak or distributional sense since η is not differentiable. In the N → ∞ limit, it has been argued that η approaches a smooth density ρ that evolves according to the Vlasov equation, which has the same form as (8) but with η replaced by ρ Ichimaru (1973); Desai and Zwanzig (1978); Strogatz and Mirollo (1991); Nicholson (1993); Hildebrand et al. (2007). This has been proved rigorously in the case where noise is added, using the theory of coupled diffusions McKean Jr (1966); Faugeras et al. (2009); Baladron et al. (2012); Touboul (2012). This N → ∞ limit is called mean field theory. In mean field theory, the original microscopic many body neuronal network is represented by a smooth macroscopic density function. In other words, the ensemble of networks prepared with different microscopic initial conditions is sharply peaked at the mean field solution. For large but finite N, there will be deviations away from mean field Buice and Chow (2007); Hildebrand et al. (2007); Buice and Chow (2013a,b). These deviations can be characterized in terms of a distribution over an ensemble of coupled networks that are all prepared with different initial conditions and parameter values. Here, we show how a perturbation theory in N^{-1} can be developed to expand around the mean field solution. This requires the construction of the probability density functional over the ensemble of spiking neural networks. We adapt the tools of statistical field theory to perform such a construction.
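
To make the mean field limit concrete, the following sketch integrates the smooth (Vlasov) version of (8), together with (4) and (7), on a phase grid using a simple first-order upwind scheme. The use of a single value of γ (a point-mass g), the theta-model F, and all grid and parameter choices are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed parameters): mean field (Vlasov) dynamics, i.e.,
# Eq. (8) with eta replaced by a smooth density rho, coupled to Eqs. (4)
# and (7). We assume the theta-model F and a uniform initial density.
M = 400
th = np.linspace(-np.pi, np.pi, M, endpoint=False)
dth = th[1] - th[0]
rho = np.full(M, 1.0 / (2 * np.pi))       # asynchronous initial condition
I, gamma, beta, u = 0.1, 0.5, 1.0, 0.0
dt, T = 1e-3, 20.0
ipi = M - 1                               # grid point closest to theta = pi

for _ in range(int(T / dt)):
    F = (1 - np.cos(th)) + (1 + np.cos(th)) * (I + gamma * u)
    flux = F * rho
    # first-order upwind update with periodic wrap (neurons re-enter at -pi)
    rho = rho - dt * (flux - np.roll(flux, 1)) / dth
    nu = F[ipi] * rho[ipi]                # firing rate, Eq. (7)
    u += dt * beta * (nu - u)             # synaptic drive, Eq. (4)

print(f"u -> {u:.4f}; density mass = {rho.sum() * dth:.4f}")
```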

Formalism

The complete description of the system given by equations (4, 7, 8) can be written as

\[ \dot u(t) + \beta u(t) - \beta\!\int d\gamma\; F(\pi,\gamma,u(t))\,\eta(\pi,\gamma,t) = 0 \tag{9} \]
\[ \partial_t \eta(\theta,\gamma,t) + \partial_\theta\left[F(\theta,\gamma,u(t))\,\eta(\theta,\gamma,t)\right] = 0 \tag{10} \]

The probability density functional governing the system specified by the synaptic drive and Klimontovich equations (9) and (10), given initial conditions (η0, u0), can be written as

\[ P[\eta, u] = \int \mathcal{D}\eta_0\, du_0\; P[\eta, u \,|\, \eta_0, u_0]\; P_0[\eta_0, u_0] \tag{11} \]

where P[η, u | η0, u0] is the conditional probability density functional of the functions (η, u) given the initial conditions, and P0[η0, u0] is the density functional over initial conditions of the system. The integral is a Feynman-Kac path integral over all allowed initial condition functions. Formally, we can write P[η, u | η0, u0] as a point mass (Dirac delta functional) located at the solutions of (9) and (10) given the initial conditions:

\[ P[\eta, u \,|\, \eta_0, u_0] = \delta\!\left[\dot u + \beta u - \beta\!\int d\gamma\, F(\pi,\gamma,u)\,\eta(\pi,\gamma,t)\right]\delta\!\left[\partial_t \eta + \partial_\theta\left(F \eta\right)\right] \]

with η(θ, γ, t0) = η0 and u(t0) = u0.

The probability density functional (11) is then

\[ P[\eta, u] = \int \mathcal{D}\eta_0\, du_0\; \delta\!\left[\dot u + \beta u - \beta\!\int d\gamma\, F(\pi,\gamma,u)\,\eta(\pi,\gamma,t)\right]\delta\!\left[\partial_t \eta + \partial_\theta\left(F \eta\right)\right] P_0[\eta_0, u_0] \tag{12} \]

Equation (12) can be made useful by noting that the Fourier representation of a Dirac delta is given by $\delta(x) \propto \int dk\, e^{ikx}$. Using the infinite dimensional Fourier functional transform then gives

\[ P[\eta, u] \propto \int \mathcal{D}\tilde\eta\; \mathcal{D}\tilde u\; e^{-N S[\eta, \tilde\eta, u, \tilde u]} \]

The exponent S in the probability density functional is called the action and, after a transformation to the variables (φ, φ̃) described below, has the form

\[ S = S_u + S_\varphi + S_0 \tag{13} \]

where

\[ S_\varphi = \int d\theta\, d\gamma\, dt\; \tilde\varphi(x)\left[\partial_t \varphi(x) + \partial_\theta F(\theta,\gamma,u(t))\,\varphi(x)\right] \tag{14} \]

represents the contribution of the transformed neuron density to the action,

\[ S_u = \frac{1}{N}\int dt\; \tilde u(t)\left(\dot u(t) + \beta u(t) - \beta\!\int d\gamma\; F(\pi,\gamma,u(t))\left[\tilde\varphi(\pi,\gamma,t) + 1\right]\varphi(\pi,\gamma,t)\right) \tag{15} \]

represents the global synaptic drive, $S_0[\tilde\varphi_0(x_0), u_0]$ represents the initial conditions, and x = (θ, γ, t). For the case where the neurons are considered to be independent in the initial state, we have

\[ S_0\left[\tilde\varphi_0(x_0), u_0\right] = \frac{1}{N}\tilde u(t_0)\, u_0 - \ln\!\left(1 + \int d\theta\, d\gamma\; \tilde\varphi_0(\theta,\gamma,t_0)\,\rho_0(\theta,\gamma,t_0)\right) \tag{16} \]

where u0 is the initial value of the coupling variable and ρ0(θ, γ, t) is the distribution from which the initial configuration is drawn for each neuron. The action includes two imaginary auxiliary response fields (indicated with a tilde), which are the infinite dimensional Fourier transform variables. The factor of 1/N appears to ensure correct scaling between the u and φ variables since u applies to a single neuron while φ applies to the entire population. The full derivation is given in Buice and Chow (2013b) and a review of path integral methods applied to differential equations is given in Buice and Chow (2010). In the course of the derivation we have made a Doi-Peliti-Janssen transformation Janssen and Täuber (2005); Buice and Chow (2013b), given by

\[ \varphi(x) = \eta(x)\, e^{-\tilde\eta(x)} \]
\[ \tilde\varphi(x) = e^{\tilde\eta(x)} - 1 \]

In deriving the action, we have explicitly chosen the Ito convention so that the auxiliary variables only depend on variables in the past. The action (13) contains all the information about the statistics of the network.

The moments for this distribution can be obtained by taking functional derivatives of a moment generating functional. Generally, the moment generating function for a random variable is given by the expectation value of the exponential of that variable with a single parameter. Because our goal is to transform to new variables for the first and second cumulants, we form a “two-field” moment generating functional, which includes a second parameter for pairs of random variables,

\[ e^{N W[J,K]} = \int \mathcal{D}\xi\; \exp\!\left(-N S[\xi] + N\!\int dx\; J^i(x)\,\xi_i(x) + \frac{N}{2}\!\int dx\, dx'\; \xi_i(x)\, K^{ij}(x,x')\,\xi_j(x')\right) \tag{17} \]

where J and K are moment generating fields, ξ1(x) = u(t), ξ2(x) = ũ(t), ξ3(x) = φ(x), ξ4(x) = φ̃(x), and x = (θ, γ, t). Einstein summation convention is observed between upper and lower indices. Unindexed variables represent vectors. The integration measure dx is assumed to be dt when involving indices 1 and 2. A covariance between an odd and an even index corresponds to a covariance between a field and an auxiliary field. From the structure of the action S and (17), such a covariance represents a linear propagator, which by causality and the choice of the Ito convention is non-zero only if the auxiliary field is evaluated at an earlier time than the field. Covariances between two even indices correspond to those between two auxiliary fields and are always zero because of the Ito convention.

The mean and covariances of ξ can be obtained by taking derivatives of the generating functional W[J, K] in (17) with respect to J and K and setting J and K to zero:

\[ \frac{\delta W}{\delta J^i} = \langle \xi_i \rangle \Big|_{J,K=0} \]
\[ \frac{\delta W}{\delta K^{ij}} = \frac{1}{2}\langle \xi_i\, \xi_j \rangle \Big|_{J,K=0} \]

Expressions for these moments can be computed by expanding the path integral in (17) perturbatively around some mean field solution. However, this can be unwieldy if closed form expressions for the mean field equations do not exist. Alternatively, the moments at any order can be expressed as self-consistent dynamical equations that can be analyzed or simulated numerically. Such equations form a set of generalized activity equations for the means $a_i = \langle \xi_i \rangle$ and covariances $C_{ij} = N\left[\langle \xi_i \xi_j \rangle - a_i a_j\right]$.

We derive the generalized activity equations by Legendre transforming the action W, which is a function of J and K, to an effective action Γ that is a function of a and C. Just as a Fourier transform expresses a function in terms of its frequencies, a Legendre transform expresses a convex function in terms of its derivatives. This is appropriate for our case because the moments are derivatives of the action. The Legendre transform of W[J, K] is

\[ \Gamma[a,C] = -W[J,K] + \int dx\; J^i a_i + \frac{1}{2}\int dx\, dx' \left[a_i a_j + \frac{1}{N} C_{ij}\right] K^{ij} \tag{18} \]

which must obey the constraints

\[ \frac{\delta W}{\delta J^i} = a_i \]
\[ \frac{\delta W}{\delta K^{ij}} = \frac{1}{2}\left[a_i a_j + \frac{1}{N} C_{ij}\right] \]

and

\[ \frac{\delta\Gamma}{\delta a_i} \equiv \Gamma_{i,00} = J_i + \frac{1}{2}\int dx'\; a_j\left[K_{ij} + K_{ji}\right] \]
\[ \frac{\delta\Gamma}{\delta C_{ij}} \equiv \Gamma_{0,ij} = \frac{1}{2N} K_{ij} \tag{19} \]

The generalized activity equations are given by the equations of motion of the effective action, in direct analogy to the Euler-Lagrange equations of classical mechanics, and are obtained by setting Ji = 0 and Kij = 0 in (19).

In essence, what the effective action does is take a probabilistic (statistical mechanical) system in the variables ξ with action S and transform it to a deterministic (classical mechanical) system with an action Γ. Our approach here follows that used in Buice et al. (2010) to construct generalized activity equations for the Wilson-Cowan model. However, there are major differences between that system and this one. In Buice et al. (2010), the microscopic equations were for the spike counts of an inherently probabilistic model, so the effective action and ensuing generalized activity equations could be constructed directly from the Markovian spike count dynamics. Here, we start from deterministically firing individual neurons and get to a probabilistic description through the Klimontovich equation. It would be straightforward to include stochastic effects into the spiking dynamics.
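
A scalar toy example (our own, not from the paper) may help build intuition for this transform. For a Gaussian variable with cumulant generating function W(J) = μJ + σ²J²/2, the Legendre transform Γ(a) = −W(J) + Ja with a = dW/dJ gives Γ(a) = (a − μ)²/(2σ²), and the equation of motion dΓ/da = 0 recovers the mean a = μ, in direct analogy with setting J = K = 0 above.

```python
import numpy as np

# Toy illustration (assumed example): Legendre transform of a scalar Gaussian
# cumulant generating function W(J) = mu*J + 0.5*sigma^2*J^2.
mu, sigma = 1.5, 0.7

def gamma_eff(a):
    J = (a - mu) / sigma**2          # invert the constraint a = dW/dJ
    W = mu * J + 0.5 * sigma**2 * J**2
    return -W + J * a                # Gamma(a) = -W(J) + J*a

a = np.linspace(0.0, 3.0, 301)
g = gamma_eff(a)
print(np.allclose(g, (a - mu)**2 / (2 * sigma**2)))  # closed form check: True
print(a[np.argmin(g)])               # dGamma/da = 0 is solved by a = mu = 1.5
```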

Using (18) in (17) gives

\[ e^{-N\Gamma[a,C]} = \int \mathcal{D}\xi\; \exp\!\left(-N S[\xi] + N\!\int dx\; J^i\left(\xi_i - a_i\right) + \frac{N}{2}\!\int dx\, dx'\; K^{ij}\left[\xi_i \xi_j - a_i a_j - \frac{1}{N} C_{ij}\right]\right) \tag{20} \]

where J and K are constrained by (19). We cannot compute the effective action explicitly, but we can compute it perturbatively in N^{-1}. We first perform a shift ξ_i = a_i + ψ_i, expand the action as $S[a + \psi] = S[a] + \int dx\; L_i[a]\,\psi_i + \frac{1}{2}\int dx\, dx'\; L_{ij}[a]\,\psi_i \psi_j + \cdots$, and substitute for J and K using the constraints (19) to obtain

\[ e^{-N\Gamma[a,C]} = e^{-N S[a] - N\,\mathrm{Tr}\,\Gamma_{0,ij} C_{ij}} \int \mathcal{D}\psi\; \exp\!\left(-\frac{N}{2}\int dx\, dx'\; \psi_i\left[L_{ij} - 2N\,\Gamma_{0,ij}\right]\psi_j + N\!\int dx\left[\Gamma_{i,00} - L_i\right]\psi_i + \cdots\right) \tag{21} \]

where

\[ \mathrm{Tr}\, A_{ij} B_{ij} = \int dx\, dx'\; A_{ij}(x, x')\, B_{ij}(x, x') \tag{22} \]

Our goal is to construct an expansion for Γ by collecting terms in successive orders of N^{-1} in the path integral of (21). Expanding Γ as $\Gamma[a, C] = \Gamma_0 + N^{-1}\Gamma_1 + N^{-2}\Gamma_2 + \cdots$ and equating coefficients of N in (21) immediately leads to the conclusion that Γ_0 = S[a], which gives

\[ e^{-N\Gamma[a,C]} = e^{-N S[a] - \mathrm{Tr}\,\Gamma^1_{0,ij} C_{ij}} \int \mathcal{D}\psi\; \exp\!\left(-\frac{N}{2}\int dx\, dx'\; \psi_i\left[L_{ij} - 2\,\Gamma^1_{0,ij}\right]\psi_j\right) \]

where higher order terms in N^{-1} are not included. To lowest non-zero order, $\Gamma_{0,ij} = N^{-1}\Gamma^1_{0,ij}$, since Γ_0 is a function of a only and not of C. If we set

\[ \Gamma^1_{0,ij} = \tfrac{1}{2} L_{ij} - \tfrac{1}{2} Q_{ij}, \tag{23} \]

we obtain

\[ e^{-N\Gamma[a,C]} = e^{-N S[a] - \frac{1}{2}\mathrm{Tr}\, L_{ij} C_{ij} + \frac{1}{2}\mathrm{Tr}\, Q_{ij} C_{ij}} \int \mathcal{D}\psi\; \exp\!\left(-\frac{N}{2}\int dx\, dx'\; \psi_i\, Q_{ij}\,\psi_j\right) \tag{24} \]

to order N^{-1}. Q_{ij} is an unknown function of a and C, which we will deduce using self-consistency. The path integral in (24) is an infinite dimensional Gaussian that can be explicitly integrated; it is proportional to $1/\sqrt{\det Q_{ij}} = \exp\left(-\frac{1}{2}\ln\det Q_{ij}\right) = \exp\left(-\frac{1}{2}\mathrm{Tr}\ln Q_{ij}\right)$, using properties of matrices. Hence, (24) becomes

\[ e^{-N\Gamma[a,C]} = \exp\!\left(-N S[a] - \frac{1}{2}\mathrm{Tr}\, L_{ij} C_{ij} + \frac{1}{2}\mathrm{Tr}\, Q_{ij} C_{ij} - \frac{1}{2}\mathrm{Tr}\ln Q_{ij}\right) \]

and

\[ \Gamma[a,C] = S[a] + \frac{1}{2N}\mathrm{Tr}\, L_{ij} C_{ij} - \frac{1}{2N}\mathrm{Tr}\, Q_{ij} C_{ij} + \frac{1}{2N}\mathrm{Tr}\ln Q_{ij} \]

Taking the derivative of Γ with respect to Cij yields

\[ \Gamma_{0,ij} = \frac{1}{2N}\left(L_{ij} + (Q^{-1})_{kl}\frac{\delta Q_{lk}}{\delta C_{ij}} - \frac{\delta}{\delta C_{ij}}\left(Q_{kl} C_{lk}\right)\right) \]

Self consistency with (23) then requires that Qij = (C−1)ij which leads to the effective action

\[ \Gamma[a,C] = S[a] + \frac{1}{2N}\mathrm{Tr}\ln (C^{-1})_{ij} + \frac{1}{2N}\mathrm{Tr}\, L_{ij} C_{ij} \tag{25} \]

where

\[ \int dx'\; (C^{-1})_{ik}(x, x')\, C_{kj}(x', x_0) = \delta_{ij}\,\delta(x - x_0) \]

and we have dropped the irrelevant constant terms.

The equations of motion to order N^{-1} are obtained from (19) with J_i and K_{ij} set to zero:

\[ \frac{\delta S[a]}{\delta a_i} + \frac{1}{2N}\frac{\delta}{\delta a_i}\mathrm{Tr}\, L_{jk} C_{jk} = 0 \tag{26} \]
\[ \frac{1}{2N}\left[L_{ij} - (C^{-1})_{ij}\right] = 0 \tag{27} \]

and (27) can be rewritten as

\[ \int dx'\; L_{ik}(x, x')\, C_{kj}(x', x_0) = \delta_{ij}\,\delta(x - x_0) \tag{28} \]

Hence, given any network of spiking neurons, we can write down a set of generalized activity equations for the mean and covariance functions by (1) constructing a neuron density function, (2) writing down the conservation law (Klimontovich equation), (3) constructing the action, and (4) using formulas (26) and (28). We could have constructed these equations directly by multiplying the Klimontovich and synaptic drive equations by various factors of u and η and recombining. However, as we saw in Buice et al. (2010), this is not a straightforward calculation. The effective action approach makes this much more systematic and mechanical.

Phase Model Example

We now present a simple example to demonstrate the concepts and approximations involved in our expansion. Our goal is not to analyze the system per se but only to demonstrate the application of our method in a heuristic setting. We begin with a simple non-leaky integrate-and-fire neuron model, which responds to a global coupling variable. This is a special case of the dynamics given above, with F given by

\[ F[\theta, \gamma, u] = I(t) + \gamma u \tag{29} \]

The action from (14) and (15) is

\[ S[a] = \int d\theta\, d\gamma\, dt\; a_4(x)\left[\partial_t a_3(x) + \partial_\theta\left(I + \gamma a_1(t)\right) a_3(x)\right] + \frac{1}{N}\int dt\; a_2(t)\left(\dot a_1(t) + \beta a_1(t) - \beta\!\int d\gamma\left(I + \gamma a_1(t)\right)\left[a_4(\pi,\gamma,t) + 1\right] a_3(\pi,\gamma,t)\right) \tag{30} \]

and we ignore initial conditions for now.

In order to construct the generalized activity equations we need to compute the first and second derivatives of the action Li and Lij. Taking the first derivative of (30) gives

\[ L_1[a] = \frac{\delta S[a]}{\delta a_1(t')} = \int d\theta\, d\gamma\, dt\; \gamma\, a_4(x)\,\partial_\theta a_3(x)\,\delta(t - t') + \frac{1}{N}\left[\int dt\; a_2(t)\frac{d}{dt}\delta(t - t') + \beta a_2(t') - \beta a_2(t')\!\int d\gamma\,\gamma\left[a_4(\pi,\gamma,t') + 1\right] a_3(\pi,\gamma,t')\right] \]
\[ L_2[a] = \frac{\delta S[a]}{\delta a_2(t')} = \frac{1}{N}\left[\dot a_1(t') + \beta a_1(t') - \beta\!\int d\gamma\left(I + \gamma a_1(t')\right)\left[a_4(\pi,\gamma,t') + 1\right] a_3(\pi,\gamma,t')\right] \]
\[ L_3[a] = \frac{\delta S[a]}{\delta a_3(x')} = \int dt\; a_4(\theta',\gamma',t)\,\partial_t\delta(t - t') + \int d\theta\; a_4(\theta,\gamma',t')\,\partial_\theta\left(I + \gamma' a_1(t')\right)\delta(\theta - \theta') - \frac{\beta}{N} a_2(t')\left(I + \gamma' a_1(t')\right)\left[a_4(\pi,\gamma',t') + 1\right]\delta(\pi - \theta') \]
\[ L_4[a] = \frac{\delta S[a]}{\delta a_4(x')} = \partial_{t'} a_3(x') + \partial_{\theta'}\left(I + \gamma' a_1(t')\right) a_3(x') - \frac{\beta}{N} a_2(t')\left(I + \gamma' a_1(t')\right) a_3(\pi,\gamma',t')\,\delta(\pi - \theta') \tag{31} \]

The mean field equations are obtained by solving Li = 0 using (31). We immediately see that a2 = a4 = 0 are solutions, which leaves us with

\[ \dot a_1 + \beta a_1 - \beta\!\int d\gamma\left(I + \gamma a_1\right) a_3(\pi,\gamma,t) = 0 \tag{32} \]
\[ \partial_t a_3 + \left(I + \gamma a_1\right)\partial_\theta a_3 = 0 \tag{33} \]

The mean field equations should be compared to those of the spike response model Gerstner (1995, 2000). We can also solve (33) directly to obtain

\[ a_3(x) = \rho_0\!\left(\theta - \int_{t_0}^{t} dt'\left[I(t') + \gamma\, a_1(t')\right],\; \gamma\right) \]

where ρ0 is the initial distribution. If the neurons are initially distributed uniformly in phase, then ρ0 = g(γ)/2π and the mean field equations reduce to

\[ \dot a_1(t) + \beta a_1(t) - \frac{\beta}{2\pi}\left(I + \bar\gamma\, a_1(t)\right) = 0 \tag{34} \]

where $\bar\gamma = \int d\gamma\; \gamma\, g(\gamma)$ is the mean of γ. Equation (34) has the form of the Wilson-Cowan equation, with $(\beta/2\pi)(I + \bar\gamma a_1)$ acting as a gain function. Hence, the Wilson-Cowan equation is a full description of the infinitely large system limit of a network of globally coupled simple phase oscillators in the asynchronous state. For all other initial conditions, the one-neuron conservation equation (called the Vlasov equation in kinetic theory) must be included in mean field theory.
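
As a quick check of this reduction, the sketch below integrates (34) by forward Euler; the parameter values and the value of γ̄ are illustrative assumptions. Setting the time derivative to zero gives the fixed point a₁* = I/(2π − γ̄).

```python
import numpy as np

# Minimal sketch (assumed parameters): forward Euler integration of the
# reduced mean field equation (34) for the uniform-phase initial condition.
# gamma_bar is the mean of gamma under g.
I, beta, gamma_bar = 0.1, 1.0, 0.5
a1, dt, T = 0.0, 1e-3, 30.0

for _ in range(int(T / dt)):
    a1 += dt * (-beta * a1 + (beta / (2 * np.pi)) * (I + gamma_bar * a1))

print(f"a1(T) = {a1:.5f}; fixed point I/(2*pi - gamma_bar) = "
      f"{I / (2 * np.pi - gamma_bar):.5f}")
```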

To go beyond mean field theory we need to compute $L_{ij}(x, x', x'') = \delta L_i(x, x')/\delta a_j(x'')$:

\[ L_{11}[a] = 0 \]
\[ L_{12}[a] = \frac{1}{N}\left[-\frac{d}{dt} + \beta - \beta\!\int d\gamma\,\gamma\left[a_4(\pi,\gamma,t) + 1\right] a_3(\pi,\gamma,t)\right]\delta(t - t') \]
\[ L_{13}[a] = \left[\gamma'\!\int d\theta\; a_4(\theta,\gamma',t)\,\partial_\theta\delta(\theta - \theta') - \frac{\beta}{N}\gamma'\, a_2(t)\left[a_4(\pi,\gamma',t) + 1\right]\delta(\pi - \theta')\right]\delta(t - t') \]
\[ L_{14}[a] = \left[\gamma'\,\partial_{\theta'} a_3(x') - \frac{\beta}{N}\gamma'\, a_2(t)\, a_3(\pi,\gamma',t)\,\delta(\pi - \theta')\right]\delta(t - t') \]
\[ L_{21}[a] = \frac{1}{N}\left[\frac{d}{dt} + \beta - \beta\!\int d\gamma\,\gamma\left[a_4(\pi,\gamma,t) + 1\right] a_3(\pi,\gamma,t)\right]\delta(t - t') \]
\[ L_{22}[a] = 0 \]
\[ L_{23}[a] = -\frac{\beta}{N}\left(I + \gamma' a_1(t)\right)\left[a_4(\pi,\gamma',t) + 1\right]\delta(\pi - \theta')\,\delta(t - t') \]
\[ L_{24}[a] = -\frac{\beta}{N}\left(I + \gamma' a_1(t)\right) a_3(\pi,\gamma',t)\,\delta(\pi - \theta')\,\delta(t - t') \]
\[ L_{31}[a] = \left[\gamma\!\int d\theta'\; a_4(\theta',\gamma,t)\,\partial_{\theta'}\delta(\theta - \theta') - \frac{\beta}{N}\gamma\, a_2(t)\left[a_4(\pi,\gamma,t) + 1\right]\delta(\pi - \theta)\right]\delta(t - t') \]
\[ L_{32}[a] = -\frac{\beta}{N}\left(I + \gamma a_1(t)\right)\left[a_4(\pi,\gamma,t) + 1\right]\delta(\pi - \theta)\,\delta(t - t') \]
\[ L_{33}[a] = 0 \]
\[ L_{34}[a] = \left[\delta(\theta - \theta')\,\partial_{t'} - \partial_\theta\left(I + \gamma a_1(t)\right)\delta(\theta - \theta') - \frac{\beta}{N} a_2(t)\left(I + \gamma a_1(t)\right)\delta(\pi - \theta)\,\delta(\pi - \theta')\right]\delta(\gamma - \gamma')\,\delta(t - t') \]
\[ L_{41}[a] = \left[\gamma\,\partial_\theta a_3(x) - \frac{\beta}{N}\gamma\, a_2(t)\, a_3(\pi,\gamma,t)\,\delta(\pi - \theta)\right]\delta(t - t') \]
\[ L_{42}[a] = -\frac{\beta}{N}\left(I + \gamma a_1(t)\right) a_3(\pi,\gamma,t)\,\delta(\pi - \theta)\,\delta(t - t') \]
\[ L_{43}[a] = \partial_t\delta(x - x') + \partial_\theta\left(I + \gamma a_1(t)\right)\delta(x - x') - \frac{\beta}{N} a_2(t)\left(I + \gamma a_1(t)\right)\delta(\pi - \theta)\,\delta(\pi - \theta')\,\delta(\gamma - \gamma')\,\delta(t - t') \]
\[ L_{44}[a] = 0 \]

The activity equations for the means to order N^{-1} are given by (26). The only non-zero contributions are given by L_{13} and L_{31}, resulting in

\[ L_2 + \frac{1}{2N}\frac{\delta}{\delta a_2}\int dx\, dx' \left(L_{13} C_{13} + L_{31} C_{31}\right) = 0 \]
\[ L_4 + \frac{1}{2N}\frac{\delta}{\delta a_4}\int dx\, dx' \left(L_{13} C_{13} + L_{31} C_{31}\right) = 0 \]

since a2 = a4 = 0 and correlations involving response variables (even indices) will be zero for equal times. The full activity equations for the means are thus

\[ \dot a_1 + \beta a_1 - \beta\!\int d\gamma\left(I + \gamma a_1\right) a_3(\pi,\gamma,t) - \frac{\beta}{N}\!\int d\gamma\,\gamma\, C(\pi,\gamma,t) = 0 \tag{35} \]
\[ \partial_t a_3 + \left(I + \gamma a_1\right)\partial_\theta a_3 + \frac{1}{N}\gamma\,\partial_\theta C(\theta,\gamma,t) = 0 \tag{36} \]

where C(θ, γ, t) = C13(t;θ, γ, t) = C31(θ, γ, t; t).

We can now use the Lij in (28) to obtain activity equations for Cij. There will be sixteen coupled equations in total but the applicable non-zero ones are

\[ \left[\frac{d}{dt} + \beta - \beta\!\int d\gamma\,\gamma\, a_3(\pi,\gamma,t)\right] C_{11}(t;t_0) - \beta\!\int d\gamma\left(I + \gamma a_1\right) C_{31}(\pi,\gamma,t;t_0) - \beta\!\int d\gamma\left(I + \gamma a_1(t)\right) a_3(\pi,\gamma,t)\, C_{41}(\pi,\gamma,t;t_0) = 0 \tag{37} \]
\[ \left[\frac{d}{dt} + \beta - \beta\!\int d\gamma\,\gamma\, a_3(\pi,\gamma,t)\right] C_{13}(t;x_0) - \beta\!\int d\gamma\left(I + \gamma a_1\right) C_{33}(\pi,\gamma,t;x_0) - \beta\!\int d\gamma\left(I + \gamma a_1(t)\right) a_3(\pi,\gamma,t)\, C_{43}(\pi,\gamma,t;x_0) = 0 \tag{38} \]
\[ \gamma\,\partial_\theta a_3(x)\, C_{11}(t;t_0) + \left[\partial_t + \left(I + \gamma a_1\right)\partial_\theta\right] C_{31}(x;t_0) - \frac{\beta}{N}\left(I + \gamma a_1(t)\right) a_3(\pi,\gamma,t)\,\delta(\pi - \theta)\, C_{21}(t;t_0) = 0 \tag{39} \]
\[ \gamma\,\partial_\theta a_3(x)\, C_{13}(t;x_0) + \left[\partial_t + \left(I + \gamma a_1(t)\right)\partial_\theta\right] C_{33}(x;x_0) - \frac{\beta}{N}\left(I + \gamma a_1(t)\right) a_3(\pi,\gamma,t)\,\delta(\pi - \theta)\, C_{23}(t;x_0) = 0 \tag{40} \]

Adding (38) and (39), taking the limit t_0 → t, and setting θ_0 = θ, γ_0 = γ gives

\[ \partial_t C(\theta,\gamma,t) + \left[\beta - \beta\!\int d\gamma'\,\gamma'\, a_3(\pi,\gamma',t) + \left(I + \gamma a_1\right)\partial_\theta\right] C(\theta,\gamma,t) - \beta\!\int d\gamma'\left(I + \gamma' a_1\right) C_{33}(\pi,\gamma',t;x) - 2\beta\left(I + \gamma a_1(t)\right) a_3(\pi,\gamma,t)\,\delta(\pi - \theta) + \gamma\,\partial_\theta a_3(x)\, C_{11}(t;t) = 0 \]

where we use the fact that C_21(t, t′) = N and C_43(x; x′) = δ(θ − θ′)δ(γ − γ′) in the limit of t′ approaching t from below, with both equal to zero when t′ approaches t from above. Adding (37) and (40) to themselves with t and t_0 interchanged and taking the limit of t_0 approaching t gives

\[ \left[\frac{d}{dt} + 2\beta - 2\beta\!\int d\gamma\,\gamma\, a_3(\pi,\gamma,t)\right] C_{11}(t;t) - 2\beta\!\int d\gamma\left(I + \gamma a_1\right) C(\pi,\gamma,t) = 0 \]
\[ \left[\partial_t + \left(I + \gamma a_1(t)\right)\partial_\theta\right] C_{33}(x;x) + 2\gamma\left[\partial_\theta a_3(x)\right] C(x) = 0 \]

because C41(x; t) = 0 and C23(t; x) = 0. Putting this all together, we get the generalized activity equations

\[ \frac{da_1}{dt} + \beta a_1(t) - \beta\!\int d\gamma\left(I + \gamma a_1(t)\right) a_3(\pi,\gamma,t) - \frac{\beta}{N}\!\int d\gamma\,\gamma\, C(\pi,\gamma,t) = 0 \tag{41} \]
\[ \partial_t a_3(\theta,\gamma,t) + \left(I + \gamma a_1\right)\partial_\theta a_3(\theta,\gamma,t) + \frac{1}{N}\gamma\,\partial_\theta C(\theta,\gamma,t) = 0 \tag{42} \]
\[ \partial_t C(\theta,\gamma,t) + \left[\beta - \beta\!\int d\gamma'\,\gamma'\, a_3(\pi,\gamma',t) + \left(I + \gamma a_1\right)\partial_\theta\right] C(\theta,\gamma,t) - \beta\!\int d\gamma'\left(I + \gamma' a_1\right) C_{33}(\pi,\gamma',t;\theta,\gamma,t) - 2\beta\left(I + \gamma a_1(t)\right) a_3(\theta,\gamma,t)\,\delta(\pi - \theta) + \gamma\,\partial_\theta a_3(\theta,\gamma,t)\, C_{11}(t;t) = 0 \tag{43} \]
\[ \left[\frac{d}{dt} + 2\beta - 2\beta\!\int d\gamma\,\gamma\, a_3(\pi,\gamma,t)\right] C_{11}(t;t) - 2\beta\!\int d\gamma\left(I + \gamma a_1\right) C(\pi,\gamma,t) = 0 \tag{44} \]
\[ \left[\partial_t + \left(I + \gamma a_1(t)\right)\partial_\theta\right] C_{33}(\theta,\gamma,t;\theta,\gamma,t) + 2\gamma\,\partial_\theta a_3(\theta,\gamma,t)\, C(\theta,\gamma,t) = 0 \tag{45} \]

Initial conditions, which are specified in the action, are required for each of these equations. Deriving these equations by classical means requires careful consideration for each particular model; our method provides a general mechanistic algorithm. We propose that these equations represent a new scheme for studying neural networks.

Equations (41–45) are the complete self-consistent generalized activity equations for the mean and correlations to order N^{-1}. They form a system of partial differential equations in t and θ that can be directly analyzed or numerically simulated. Although the equations seem complicated, one must bear in mind that they represent the dynamics of the system averaged over initial conditions and unknown parameters. Hence, the solution of this PDE system replaces multiple simulations of the original system. In previous work, we required over a million simulations of the original system to obtain adequate statistics Buice and Chow (2013b). There is also a possibility that simplifying approximations can be applied to such systems. The system has complete phase memory because the original system was fully deterministic. However, the inclusion of stochastic effects will shorten the memory and possibly simplify the dynamics. It poses no problem to include such stochastic effects; in fact, the formalism is actually more suited to stochastic systems Buice et al. (2010).
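
For comparison, the brute-force alternative that these activity equations are meant to replace looks like the following sketch: simulate an ensemble of networks of the simple model (29) from random initial conditions and parameters, and estimate a₁ and C₁₁ across trials. The ensemble size, the parameters, and the Gaussian choice for g are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed parameters): ensemble estimate of the mean a1 and
# the equal-time covariance C11 = N*Var(u) that Eqs. (41)-(45) compute
# self-consistently.
rng = np.random.default_rng(1)
N, I, beta = 100, 0.1, 1.0
dt, T, trials = 1e-3, 10.0, 500
u_end = np.empty(trials)

for k in range(trials):
    gamma = rng.normal(0.5, 0.1, size=N)        # parameters drawn from g
    theta = rng.uniform(-np.pi, np.pi, size=N)  # random initial phases
    u = 0.0
    for _ in range(int(T / dt)):
        theta = theta + dt * (I + gamma * u)    # Eq. (3) with F = I + gamma*u
        crossed = theta >= np.pi
        nu = crossed.sum() / (N * dt)           # Eq. (5)
        u += dt * beta * (nu - u)               # Eq. (4)
        theta[crossed] -= 2 * np.pi
    u_end[k] = u

a1_est = u_end.mean()
C11_est = N * u_end.var()                       # C_ij = N[<xi_i xi_j> - a_i a_j]
print(f"a1 ~ {a1_est:.4f}, C11(T;T) ~ {C11_est:.4f}")
```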

Discussion

The main goal of this paper was to show how to systematically derive generalized activity equations for the ensemble averaged moments of a deterministically coupled network of spiking neurons. Our method utilizes a path integral formalism that makes the process algorithmic. The resulting equations could be derived using more conventional perturbative methods although possibly with more calculational difficulty as we found before Buice et al. (2010). For example, for the case of the stochastic spike model, Buice et al. (2010) presumed that the Wilson-Cowan activity variable was the rate of a Poisson process and derived a system of generalized activity equations that corresponded to deviations around Poisson firing. Bressloff (2010), on the other hand, assumed that the Wilson-Cowan activity variable was a mean density and used a system-size expansion to derive an alternative set of generalized activity equations for the spike model. The classical derivations of these two interpretations look quite different and the differences and similarities between them are not readily apparent. However, the connections between the two types of expansions are very transparent using the path integral formalism.

Here, we derived equations for the rate and covariances (first and second cumulants) of a deterministic synaptically coupled spiking network as a system size expansion to first order. However, our method is not restricted to these choices. What is particularly advantageous about the path integral formalism is that it is straightforward to generalize to include higher order cumulants, extend to higher orders in the inverse system size, or to expand in other small parameters such as the inverse of a slow time scale. The action fully specifies the system and all questions regarding the system can be addressed with it.

To give a concrete illustration of the method, we derived the self-consistent generalized activity equations for the rates and covariances to order N^{-1} for a simple phase model. The resulting equations consist of ordinary and partial differential equations. This is to be expected since the original system was fully deterministic and memory cannot be lost. Even mean field theory requires the solution of an advective partial differential equation. The properties of these and similar equations remain to be explored computationally and analytically. The system is possibly simpler near the asynchronous state, which is marginally stable in mean field theory, as in the Kuramoto model Strogatz and Mirollo (1991); as with the Kuramoto model, we conjecture that finite size effects will stabilize the asynchronous state Buice and Chow (2007); Hildebrand et al. (2007). The addition of noise will also stabilize the asynchronous state. Proximity to the asynchronous state could be exploited to generate simplified versions of these equations.

We considered a globally connected network, which allowed us to assume that networks with different parameter values and initial conditions converge toward a "typical" system in the large N limit. However, this property may not hold for more realistic networks. While the formalism describing the ensemble average will hold regardless of this assumption, the utility of the equations as descriptions of a particular network's behavior may suffer. For example, heterogeneity in the connectivity (as opposed to the global connectivity we consider here) may threaten this assumption. This is the case with so-called "chaotic random networks" Sompolinsky et al. (1988), in which there is a spin-glass transition owing to the variance of the connectivity crossing a critical threshold. This results in the loss of a "typical" system in the large N limit, requiring an effective stochastic equation that incorporates the noise induced by the network heterogeneity. Whether the expansion we present here is useful without further consideration depends upon whether the network heterogeneity induces this sort of effect. This is an area for future work. A simpler issue arises when there is a small discrete number of "typical" systems (such as with bistable solutions to the continuity equation). In this case, there are noise induced transitions between states. While the formalism has a means of computing this transition Elgart and Kamenev (2004), we do not consider this case here.

An alternative means to incorporate heterogeneous connections is to consider a network of coupled systems. In such a network, a set of generalized activity equations, such as those derived here or simplified versions, would be derived for each local system, together with equations governing the covariances between the local systems. Correlation based learning dynamics could then be imposed on the connections between the local systems. Such a network could serve as a generalization of current rate based neural networks to include the effects of spike correlations with applications to both neuroscience and machine learning.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This work was supported by the Intramural Research Program of the NIH, NIDDK.

References

Abbott, L., and van Vreeswijk, C. (1993). Asynchronous states in networks of pulse-coupled oscillators. Phys. Rev. E 48, 1483–1490. doi: 10.1103/PhysRevE.48.1483

Ailamaki, A., Alvandpour, A., Amunts, K., Andreoni, W., Ashburner, J., Axer, M., et al. (2012). The Human Brain Project. A Report to the European Commission. Lausanne: The HBP-PS Consortium.

Baladron, J., Fasoli, D., Faugeras, O., and Touboul, J. (2012). Mean Field description of and propagation of chaos in recurrent multipopulation networks of Hodgkin-Huxley and Fitzhugh-Nagumo neurons. J. Math. Neurosci. 2:10. doi: 10.1186/2190-8567-2-10

Bressloff, P. C. (2010). Stochastic neural field theory and the system-size expansion. SIAM J. Appl. Math. 70, 1488. doi: 10.1137/090756971

Buice, M., and Cowan, J. (2009). Statistical mechanics of the neocortex. Progr. Biophys. Mol. Biol. 99, 53–86. doi: 10.1016/j.pbiomolbio.2009.07.003

Buice, M. A., and Chow, C. C. (2007). Correlations, fluctuations, and stability of a finite-size network of coupled oscillators. Phys. Rev. E 76:031118. doi: 10.1103/PhysRevE.76.031118

Buice, M. A., and Chow, C. C. (2010). Path integral methods for stochastic differential equations. arXiv 1009.5966.

Buice, M. A., and Chow, C. C. (2011). Effective stochastic behavior in dynamical systems with incomplete information. Phys. Rev. E 84(5 Pt 1):051120. doi: 10.1103/PhysRevE.84.051120

Buice, M. A., and Chow, C. C. (2013a). Beyond mean field theory: statistical field theory for neural networks. J. Stat. Mech. Theor. Exp. 2013:P03003. doi: 10.1088/1742-5468/2013/03/P03003

Buice, M. A., and Chow, C. C. (2013b). Dynamic finite size effects in spiking neural networks. PLoS Comput. Biol. 9:e1002872.

Buice, M. A., Cowan, J. D., and Chow, C. C. (2010). Systematic fluctuation expansion for neural network activity equations. Neural Comput. 22, 377–426. doi: 10.1162/neco.2009.02-09-960

Desai, R., and Zwanzig, R. (1978). Statistical mechanics of a nonlinear stochastic model. J. Stat. Phys. 19, 1–24. doi: 10.1007/BF01020331

Elgart, V., and Kamenev, A. (2004). Rare events statistics in reaction-diffusion systems. Phys. Rev. E 70:041106. doi: 10.1103/PhysRevE.70.041106

Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., et al. (2012). A large-scale model of the functioning brain. Science 338, 1202–1205. doi: 10.1126/science.1225266

Ermentrout, B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Comput. 8, 979–1001. doi: 10.1162/neco.1996.8.5.979

Ermentrout, G. B., and Kopell, N. (1986). Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 46, 233–253. doi: 10.1137/0146017

Ermentrout, G., and Kopell, N. (1991). Multiple pulse interactions and averaging in systems of coupled neural oscillators. J. Math. Biol. 29, 195–217. doi: 10.1007/BF00160535

Faugeras, O., Touboul, J., and Cessac, B. (2009). A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs. Front. Comput. Neurosci. 3:1. doi: 10.3389/neuro.10.001.2009

Gerstner, W. (1995). Time structure of the activity in neural network models. Phys. Rev. E 51, 738–758. doi: 10.1103/PhysRevE.51.738

Gerstner, W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural Comput. 12, 43–89. doi: 10.1162/089976600300015899

Golomb, D., and Hansel, D. (2000). The number of synaptic inputs and the synchrony of large, sparse neuronal networks. Neural Comput. 12, 1095–1139. doi: 10.1162/089976600300015529

Hildebrand, E. J., Buice, M. A., and Chow, C. C. (2007). Kinetic theory of coupled oscillators. Phys. Rev. Lett. 98:054101. doi: 10.1103/PhysRevLett.98.054101

Hoppensteadt, F., and Izhikevich, E. (1997). Weakly Connected Neural Networks. New York, NY: Springer-Verlag. doi: 10.1007/978-1-4612-1828-9

Ichimaru, S. (1973). Basic Principles of Plasma Physics, a Statistical Approach. New York, NY: Benjamin.

Izhikevich, E. M., and Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. Proc. Natl. Acad. Sci. U.S.A. 105, 3593–3598. doi: 10.1073/pnas.0712231105

Janssen, H., and Täuber, U. (2005). The field theory approach to percolation processes. Ann. Phys. 315, 147–192. doi: 10.1016/j.aop.2004.09.011

Kuramoto, Y. (1984). Chemical Oscillations, Waves, and Turbulence. Berlin: Springer. doi: 10.1007/978-3-642-69689-3

Liboff, R. L. (2003). Kinetic Theory. New York, NY: Springer.

McKean, H. Jr. (1966). A class of Markov processes associated with nonlinear parabolic equations. Proc. Natl. Acad. Sci. U.S.A. 56, 1907. doi: 10.1073/pnas.56.6.1907

Nicholson, D. R. (1993). Introduction to Plasma Theory. Malabar, FL: Krieger Publishing Co.

Salinas, E., and Sejnowski, T. J. (2000). Impact of correlated synaptic input on output firing rate and variability in simple neuronal models. J. Neurosci. 20, 6193–6209.

Sompolinsky, H., Crisanti, A., and Sommers, H. (1988). Chaos in random neural networks. Phys. Rev. Lett. 61, 259–262. doi: 10.1103/PhysRevLett.61.259

Strogatz, S., and Mirollo, R. (1991). Stability of incoherence in a population of coupled oscillators. J. Stat. Phys. 63, 613–635. doi: 10.1007/BF01029202

Touboul, J. (2012). Mean-field equations for stochastic firing-rate neural fields with delays: Derivation and noise-induced transitions. Phys. D Nonlin. Phenom. 241, 1223–1244. doi: 10.1016/j.physd.2012.03.010

Treves, A. (1993). Mean-field analysis of neuronal spike dynamics. Netw. Comput. Neural Syst. 4, 259–284. doi: 10.1088/0954-898X/4/3/002

Keywords: mean field theory, theta model, fokker-planck, correlations, finite size networks, wilson-cowan model, population rate, fluctuations

Citation: Buice MA and Chow CC (2013) Generalized activity equations for spiking neural network dynamics. Front. Comput. Neurosci. 7:162. doi: 10.3389/fncom.2013.00162

Received: 29 April 2013; Accepted: 23 October 2013;
Published online: 15 November 2013.

Edited by:

Julie Wall, Queen Mary, University of London, UK

Reviewed by:

Brent Doiron, University of Pittsburgh, USA
G. Bard Ermentrout, University of Pittsburgh, USA

Copyright © 2013 Buice and Chow. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Carson C. Chow, Laboratory of Biological Modeling, NIDDK, National Institutes of Health, Building 12A Room 4007, 12 South Drive, Bethesda, MD 20814, USA e-mail: carsonc@mail.nih.gov
