
Original Research Article

Front. Comput. Neurosci., 18 February 2009 | https://doi.org/10.3389/neuro.10.001.2009

A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs

1 NeuroMathComp Laboratory, INRIA/ENS, France
2 Laboratoire Jean-Alexandre Dieudonné, Nice, France
3 Université de Nice, Nice, France
We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995) : their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales.

Introduction

Modeling neural activity at scales integrating the effect of thousands of neurons is of central importance for several reasons. First, most imaging techniques are not able to measure individual neuron activity (the "microscopic" scale), but instead measure mesoscopic effects resulting from the activity of several hundreds to several hundreds of thousands of neurons. Second, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50 μm to 1 mm, containing on the order of 100–100,000 neurons belonging to a few different species. These columns have specific functions. For example, in the visual cortex V1, they respond to preferential orientations of bar-shaped visual stimuli. In this case, information processing does not occur at the scale of individual neurons but rather corresponds to an activity integrating the collective dynamics of many interacting neurons and resulting in a mesoscopic signal. The description of this collective dynamics requires models which are different from individual neuron models. In particular, if the accurate description of one neuron requires "m" parameters (such as sodium, potassium, and calcium conductances, membrane capacitance, etc.), it is not necessarily true that an accurate mesoscopic description of an assembly of N neurons requires Nm parameters. Indeed, when N is large enough, averaging effects appear, and the collective dynamics is well described by an effective mean-field, summarizing the effect of the interactions of a neuron with the other neurons and depending on a few effective control parameters. This vision, inherited from statistical physics, requires that the space scale be large enough to include a large number of microscopic components (here neurons) and small enough so that the region considered is homogeneous. This is in effect the case for cortical columns.
However, obtaining the evolution equations of the effective mean-field from the microscopic dynamics is far from being evident. In simple physical models this can be achieved via the law of large numbers and the central limit theorem, provided that time correlations decrease sufficiently fast. This type of approach has been generalized to such fields as quantum field theory or non-equilibrium statistical mechanics. To the best of our knowledge, the idea of applying mean-field methods to neural networks dates back to Amari (Amari, 1972; Amari et al., 1977). In his approach, the author uses an assumption that he called the "local chaos hypothesis", reminiscent of Boltzmann's "molecular chaos hypothesis", which postulates the vanishing of individual correlations between neurons when the number N of neurons tends to infinity. Later on, Sompolinsky et al. (1988) used a dynamic mean-field approach to conjecture the existence of chaos in a homogeneous neural network with random independent synaptic weights. This approach had previously been developed by Sompolinsky and colleagues for spin-glasses (Crisanti and Sompolinsky, 1987a,b; Sompolinsky and Zippelius, 1982), where complex effects such as aging or the coexistence of a diverging number of metastable states render the mean-field analysis delicate in the long-time limit (Houghton et al., 1983).
In contrast, these effects do not appear in the neural network considered in Sompolinsky et al. (1988) because the synaptic weights are independent (Cessac, 1995) (and, in particular, nonsymmetric, unlike those of spin-glasses). In this case, the Amari approach and the dynamic mean-field approach lead to the same mean-field equations. Later on, the mean-field equations derived by Sompolinsky and Zippelius (1982) for spin-glasses were rigorously obtained by Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). The application of their method to a discrete-time version of the neural network considered in Sompolinsky et al. (1988) and in Molgedey et al. (1992) was done by Moynot and Samuelides (2002).
Mean-field methods are often used in the neural network community, but there are only a few rigorous results using the dynamic mean-field method. The main advantage of dynamic mean-field techniques is that they allow one to consider neural networks where the synaptic weights are random (and independent). The mean-field approach allows one to state general and generic results about the dynamics as a function of the statistical parameters controlling the probability distribution of the synaptic weights (Samuelides and Cessac, 2007). It provides not only the evolution of the mean activity of the network but also, because it is an equation on the law of the mean-field, information on the fluctuations around the mean and their correlations. These correlations are of crucial importance, as revealed in the paper by Sompolinsky et al. (1988). Indeed, in their work, the analysis of correlations allows them to discriminate between two distinct regimes: a dynamics with a stable fixed point and a chaotic dynamics, while the mean is identically 0 in both regimes.
However, this approach also has several drawbacks, explaining why it is so seldom used. First, the method uses a generating-function approach that requires heavy computations and some "art" to obtain the mean-field equations. Second, it is hard to generalize to models including several populations. Finally, dynamic mean-field equations are usually supposed to characterize in fine a stationary process. It is then natural to search for stationary solutions. This considerably simplifies the dynamic mean-field equations by reducing them to a set of differential equations (see Section "Numerical Experiments"), but the price to pay is the unavoidable occurrence in the equations of a non-free parameter, the initial condition, which can only be characterized through the investigation of the nonstationary case.
Hence it is not clear whether such a stationary solution exists and, if it does, how to characterize it. To the best of our knowledge, this difficult question has only been investigated for neural networks in one paper, by Crisanti et al. (1990).
Different alternative approaches have been used to get a mean-field description of a given neural network and to find its solutions. In the neuroscience community, a static mean-field study of multi-population network activity was developed by Treves (1993) . This author did not consider external inputs but incorporated dynamical synaptic currents and adaptation effects. His analysis was completed in Abbott and Van Vreeswijk (1993) , where the authors considered a unique population of nonlinear oscillators subject to a noisy input current. They proved, using a stationary Fokker–Planck formalism, the stability of an asynchronous state in the network. Later on, Gerstner (1995) built a new approach to characterize the mean-field dynamics for the Spike Response Model, via the introduction of suitable kernels propagating the collective activity of a neural population in time.
Brunel and Hakim (1999) considered a network composed of integrate-and-fire neurons connected with constant synaptic weights. In the case of sparse connectivity, stationarity, and a regime where individual neurons emit spikes at a low rate, they were able to study analytically the dynamics of the network and to show that it exhibits a sharp transition between a stationary regime and a regime of weakly synchronized fast collective oscillations. Their approach was based on a perturbative analysis of the Fokker–Planck equation. A similar formalism was used in Mattia and Del Giudice (2002) which, when complemented with self-consistency equations, resulted in a dynamical description of the mean-field equations of the network and was extended to multi-population networks.
Finally, Chizhov and Graham (2007) have recently proposed a new method, based on a population density approach, that characterizes the mesoscopic behavior of neuron populations in conductance-based models. We briefly discuss their approach and compare it to ours in Section "Discussion".
In the present paper, we investigate the problem of deriving the equations of evolution of neural masses at mesoscopic scales from the dynamics of individual neurons, using a new and rigorous approach based on stochastic analysis.
The article is organized as follows. In Section "Mean-Field Equations for Multi-Populations Neural Network Models" we derive from first principles the equations relating the membrane potential of each of a set of neurons as a function of the external injected current and noise and of the shapes and intensities of the postsynaptic potentials, in the case where these shapes depend only on the postsynaptic neuron (the so-called voltage-based model). Assuming that the shapes of the postsynaptic potentials can be described by linear (possibly time-dependent) differential equations, we express the dynamics of the neurons as a set of stochastic differential equations. Assuming that the synaptic connectivities between neurons satisfy statistical relationships depending only on the populations they belong to, we obtain the mean-field equations summarizing the interactions of the P populations in the limit where the number of neurons tends to infinity. These equations can be derived in several ways, either heuristically following the lines of Amari (Amari, 1972; Amari et al., 1977), Sompolinsky (Crisanti et al., 1990; Sompolinsky et al., 1988), and Cessac (Cessac, 1995; Samuelides and Cessac, 2007), or rigorously as in the work of Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). The purpose of this article is not the derivation of these mean-field equations but to prove that they are well-posed and to provide an algorithm for computing their solution. Before we do this we provide the reader with two important examples of such mean-field equations. The first example is what we call the simple model, a straightforward generalization of the case studied by Amari and Sompolinsky. The second example is a neuronal assembly model, or neural mass model, as introduced by Freeman (1975) and exemplified in Jansen and Rit's (1995) cortical column model.
In Section “Existence and Uniqueness of Solutions in Finite Time” we consider the problem of solutions over a finite time interval [t0, T]. We prove, under some mild assumptions, the existence and uniqueness of a solution of the dynamic mean-field equations given an initial condition at time t0. The proof consists in showing that a nonlinear equation defined on the set of multidimensional Gaussian random processes defined on [t0, T] has a fixed point. We extend this proof in Section “Existence and Uniqueness of Stationary Solutions” to the case of stationary solutions over the time interval [−∞, T] for the simple model. Both proofs are constructive and provide an algorithm for computing numerically the solutions of the mean-field equations.
We then study in Section "Numerical Experiments" the complexity and the convergence rate of this algorithm and put it to good use: we first compare our numerical results to the theoretical results of Sompolinsky and colleagues (Crisanti et al., 1990; Sompolinsky et al., 1988). We then provide an example of numerical experiments in the case of two populations of neurons where the role of the mean-field fluctuations is emphasized.
Throughout the paper we introduce several constants. To help the reader, we have collected in Table 1 of Appendix D the most important ones and the places where they are defined in the text.

Mean-Field Equations for Multi-Populations Neural Network Models

In this section we introduce the classical neural mass models and derive the mean-field equations they satisfy in the limit of an infinite number of neurons.

The General Model

General framework

We consider a network composed of N neurons indexed by i ∈ {1,…,N} belonging to P populations indexed by α ∈ {1,…,P}, such as those shown in Figure 1. Let Nα be the number of neurons in population α; we have $N = \sum_{\alpha=1}^{P} N_\alpha$. We next define the function that associates to each neuron the population it belongs to.
Figure 1. General network considered: N neurons belonging to P populations are interconnected with random synaptic weights whose probability distributions only depend upon the population indexes, see text.
Definition 1. The function p: {1,…,N} → {1,…,P} associates to each neuron i ∈ {1,…,N} the population α = p(i) ∈ {1,…,P} it belongs to.
We consider that each neuron i is described by its membrane potential Vi(t), and the related instantaneous firing rate is deduced from it through a relation of the form νi(t) = Si(Vi(t)) (Dayan and Abbott, 2001 ; Gerstner and Kistler, 2002 ), where Si is a sigmoidal function.
A single action potential from neuron j generates a postsynaptic potential PSPij(u) on the postsynaptic neuron i, where u is the time elapsed after the spike is received. We neglect the delays due to the distance traveled down the axon by the spikes.
Assuming that the postsynaptic potentials sum linearly, the average membrane potential of neuron i is

$$V_i(t) = \sum_{j,k} \mathrm{PSP}_{ij}(t - t_k^j),$$

where the sum is taken over the arrival times $t_k^j$ of the spikes produced by the neurons j after some reference time t0. The number of spikes arriving between t and t + dt is νj(t)dt. Therefore we have

$$V_i(t) = \sum_{j} \int_{t_0}^{t} \mathrm{PSP}_{ij}(t - s)\,\nu_j(s)\,ds,$$

or, equivalently,

$$V_i(t) = \sum_{j} \int_{t_0}^{t} \mathrm{PSP}_{ij}(t - s)\,S_j(V_j(s))\,ds.$$
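This integral relation is easy to discretize on a time grid; the following minimal sketch does so for one presynaptic neuron (the PSP shape, time constant, synaptic strength and firing rate are illustrative assumptions, not values from the text):

```python
import numpy as np

# Riemann-sum discretization of V_i(t) = sum_j \int_{t0}^{t} PSP_ij(t-s) nu_j(s) ds
# for a single presynaptic neuron j with a constant firing rate.
dt = 0.001                       # time step (s)
t = np.arange(0.0, 1.0, dt)      # time grid, with t0 = 0

def psp(u, J=0.5, tau=0.02):
    """Postsynaptic potential elicited u seconds after a spike arrives."""
    return J * np.exp(-u / tau) * (u >= 0)

nu_j = np.full_like(t, 10.0)     # presynaptic firing rate: 10 Hz

V_i = np.array([np.sum(psp(ti - t[:k + 1]) * nu_j[:k + 1]) * dt
                for k, ti in enumerate(t)])
# V_i relaxes towards the steady state J * tau * nu = 0.1
```

With an exponentially decaying PSP the potential saturates at J·τ·ν once the transient has died out, which is a quick sanity check of the discretization.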
The PSPijs can depend on several variables in order to account for instance for adaptation or learning.
We now make the simplifying assumption that the shape of the postsynaptic potential PSPij only depends on the postsynaptic population, which corresponds to the voltage-based models in Ermentrout’s (1998) classification.
The voltage-based model. The assumption, made in Hopfield (1984), is that the postsynaptic potential has the same shape no matter which presynaptic population caused it; the sign and amplitude may vary, though. This leads to the relation

$$\mathrm{PSP}_{ij}(t) = \bar{J}_{ij}\, g_i(t), \tag{1}$$

where gi represents the unweighted shape (called a g-shape) of the postsynaptic potentials and $\bar{J}_{ij}$ is the strength of the postsynaptic potentials elicited by neuron j on neuron i. At this stage of the discussion these weights are supposed to be deterministic; the bar in the notation $\bar{J}_{ij}$ indicates an average value. From Eq. 1 we have

$$V_i(t) = \sum_{j} \bar{J}_{ij} \int_{t_0}^{t} g_i(t - s)\, S_j(V_j(s))\,ds.$$
So far we have only considered the synaptic inputs to the neurons. We enrich our model by assuming that neuron i also receives an external current density composed of a deterministic part, noted Ii(t), and a stochastic part, noted ni(t), so that

$$V_i(t) = \sum_{j} \bar{J}_{ij} \int_{t_0}^{t} g_i(t - s)\, S_j(V_j(s))\,ds + \int_{t_0}^{t} g_i(t - s)\,\big(I_i(s) + n_i(s)\big)\,ds. \tag{3}$$
We assume, and this is essential for deriving the mean-field equations below, that all indexed quantities depend only upon the P populations of neurons (see Definition 1), i.e.,
$$S_i = S_{p(i)},\qquad \bar{J}_{ij} = \frac{\bar{J}_{p(i)p(j)}}{N_{p(j)}},\qquad I_i = I_{p(i)},\qquad n_i \sim n_{p(i)}, \tag{4}$$
where x ∼ y indicates that the two random variables x and y have the same probability distribution. In other words, all neurons in the same population are described by identical equations (in law).
The g-shapes describe the shape of the postsynaptic potentials and can reasonably well be approximated by smooth functions.
In detail we assume that gα, α = 1,…,P, is the Green function of a linear differential equation of order k, i.e., satisfies

$$\sum_{l=0}^{k} b_{l\alpha}(t)\,\frac{d^l g_\alpha}{dt^l}(t) = \delta(t), \tag{5}$$

where δ(t) is the Dirac delta function.
The functions $b_{l\alpha}(t)$, l = 0,…,k, α = 1,…,P, are assumed to be continuous. We also assume for simplicity that the leading coefficient is a positive constant,

$$b_{k\alpha}(t) = c_\alpha > 0, \tag{6}$$

for all $t \in \mathbb{R}$. We note $\mathcal{D}^k_\alpha$ the corresponding differential operator:

$$\mathcal{D}^k_\alpha = \sum_{l=0}^{k} b_{l\alpha}(t)\,\frac{d^l}{dt^l}. \tag{7}$$
Applying $\mathcal{D}^k_{p(i)}$ to both sides of Eq. 3, using Eq. 7 and the fact that νj(s) = Sj(Vj(s)), we obtain a kth-order differential equation for Vi:

$$\mathcal{D}^k_{p(i)} V_i(t) = \sum_{j} \bar{J}_{ij}\, S_j(V_j(t)) + I_i(t) + n_i(t). \tag{8}$$

With a slight abuse of notation, we split the sum with respect to j into P sums:

$$\mathcal{D}^k_{\alpha} V_i(t) = \sum_{\beta=1}^{P} \sum_{j:\, p(j)=\beta} \bar{J}_{ij}\, S_\beta(V_j(t)) + I_\alpha(t) + n_i(t), \qquad \alpha = p(i).$$

We classically turn the kth-order differential Eq. 8 into a k-dimensional system of coupled first-order differential equations (we divided both sides of the last equation by ci, see Eq. 6):

$$\begin{cases} \dfrac{dV_i^l(t)}{dt} = V_i^{l+1}(t), & l = 0,\dots,k-2,\\[4pt] \dfrac{dV_i^{k-1}(t)}{dt} = -\displaystyle\sum_{l=0}^{k-1} b_{l\alpha}(t)\, V_i^l(t) + \sum_{j} \bar{J}_{ij}\, S_j(V_j(t)) + I_\alpha(t) + n_i(t). \end{cases} \tag{9}$$
A well-known example of g-shapes, see Section "Example II: The model of Jansen and Rit" below or Gerstner and Kistler (2002), is

$$g_\alpha(t) = K e^{-t/\tau_\alpha}\, Y(t), \tag{10}$$

where Y(t) is the Heaviside function. This is an exponentially decaying postsynaptic potential corresponding to

$$k = 1,\qquad b_{0\alpha}(t) = \frac{1}{\tau_\alpha},\qquad b_{1\alpha}(t) = 1 \qquad (\text{for } K = 1)$$

in Eq. 5.
Another well-known example is

$$g_\alpha(t) = K t\, e^{-t/\tau_\alpha}\, Y(t).$$

This is a somewhat smoother function corresponding to

$$k = 2,\qquad b_{0\alpha}(t) = \frac{1}{\tau_\alpha^2},\qquad b_{1\alpha}(t) = \frac{2}{\tau_\alpha},\qquad b_{2\alpha}(t) = 1 \qquad (\text{for } K = 1)$$

in Eq. 5.
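The Green-function property of the first g-shape can be checked numerically: the solution of v′ = −v/τ + u with v(t0) = 0 must equal the convolution g ⊛ u. A small sketch under illustrative assumptions (K = 1, τ = 10 ms, a unit step input):

```python
import numpy as np

tau = 0.010                                 # illustrative time constant (s)

def g_exp(t):
    """Exponentially decaying g-shape K*exp(-t/tau)*Y(t), with K = 1."""
    return np.exp(-t / tau) * (t >= 0)

def g_smooth(t):
    """The smoother g-shape K*t*exp(-t/tau)*Y(t), with K = 1."""
    return t * np.exp(-t / tau) * (t >= 0)

# Green-function check for g_exp: the solution of
#   v'(t) = -v(t)/tau + u(t),  v(0) = 0,
# equals the convolution (g_exp * u)(t).  Here u is a unit step.
dt, T = 1e-5, 0.05
t = np.arange(0.0, T, dt)
u = np.ones_like(t)

v = np.zeros_like(t)                        # forward-Euler integration
for k in range(len(t) - 1):
    v[k + 1] = v[k] + dt * (-v[k] / tau + u[k])

v_conv = np.convolve(g_exp(t), u)[:len(t)] * dt
assert np.max(np.abs(v - v_conv)) < 1e-3    # both equal tau*(1 - exp(-t/tau))
```

Both curves converge to τ within a few time constants, as expected for a step input filtered by an exponential synapse.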
The dynamics. We modify Eq. 9 by perturbing the first k − 1 equations with Brownian noise and assuming that ni(t) is white noise. As a consequence, the quantities appearing in Eq. 9 are no longer the derivatives of Vi up to order k − 1; this becomes true again only in the limit where the added Brownian noise is null. This may seem artificial at first glance, but (1) it is a technical assumption that is necessary in the proofs of the well-posedness of the mean-field equations, see Assumption 1 below, and (2) it generates a rich class of external stochastic inputs, as shown below. With this in mind, Eq. 9 now reads

$$dV_i^l(t) = V_i^{l+1}(t)\,dt + f_{li}(t)\,dW_i^l(t), \qquad l = 0,\dots,k-2,$$
$$dV_i^{k-1}(t) = \Big(-\sum_{l=0}^{k-1} b_{l\alpha}(t)\,V_i^l(t) + \sum_{j} \bar{J}_{ij}\, S_j(V_j(t)) + I_\alpha(t)\Big)dt + f_{k-1,i}(t)\,dW_i^{k-1}(t), \tag{12}$$

where α = p(i). The Wli(t), l = 0,…,k − 1, i = 1,…,N, are kN independent standard Brownian processes. Because we want the neurons in the same class to be essentially identical, we also assume that the functions fli(t) that control the amount of noise on each derivative satisfy

fli(t) = flp(i)(t), l = 0,…,k − 1, i = 1,…,N.

Note that in the limit flα(t) = 0 for l = 0,…,k − 1 and α = 1,…,P, the components Vli(t) of the vector $(V_i^0(t),\dots,V_i^{k-1}(t))$ are the derivatives of the membrane potential Vi for l = 0,…,k − 1, and Eq. 12 turns into Eq. 9. The system of differential Eq. 12 implies that the class of admissible external stochastic inputs ni(t) to neuron i are Brownian noise integrated through the filter of the synapse, i.e., involving the lth primitives of the Brownian motion for l ≤ k.
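A minimal Euler–Maruyama sketch of the k = 2 case of Eq. 12 for a single neuron, with the synaptic coupling dropped for brevity (all parameter values below are illustrative assumptions):

```python
import numpy as np

# Euler-Maruyama integration of the k = 2 version of Eq. 12 for one
# uncoupled neuron: V0 is the membrane potential, V1 its "noisy
# derivative".  b0, b1, f0, f1 and I are arbitrary illustrations.
rng = np.random.default_rng(0)

dt, T = 1e-3, 50.0
n = int(T / dt)
b0, b1 = 4.0, 4.0              # e.g. (d/dt + 1/tau)^2 with tau = 0.5
f0, f1 = 0.1, 0.2              # noise amplitudes on the two equations
I = 1.0                        # constant external current

V0, V1 = 0.0, 0.0
path = np.empty(n)
for k in range(n):
    dW0, dW1 = rng.normal(0.0, np.sqrt(dt), 2)
    dV0 = V1 * dt + f0 * dW0                         # first line of Eq. 12
    dV1 = (-b0 * V0 - b1 * V1 + I) * dt + f1 * dW1   # last line of Eq. 12
    V0, V1 = V0 + dV0, V1 + dV1
    path[k] = V0

# With f0 = f1 = 0, (V0, V1) would be (V, dV/dt); with noise, V1 is only
# a formal derivative, as stressed in the text.  The stationary mean of
# V0 is I / b0 here.
```

The long-run average of V0 fluctuates around I/b0, while the added Brownian terms keep (V0, V1) from being an exact (function, derivative) pair.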
We now introduce the k − 1 N-dimensional vectors Vl(t) = [Vl1(t),…,VlN(t)]T, l = 1,…,k − 1, of the lth-order derivatives (in the limit of flp(i)(t) = 0) of V(t), and concatenate them with V(t) into the Nk-dimensional vector

$$\tilde{V}(t) = \begin{bmatrix} V(t) \\ V^1(t) \\ \vdots \\ V^{k-1}(t) \end{bmatrix}. \tag{13}$$

The N-neurons network is described by the Nk-dimensional vector $\tilde{V}(t)$. By definition the lth N-dimensional component $\tilde{V}^l(t)$ of $\tilde{V}(t)$ is equal to Vl(t). In the limit f(t) = 0 we have

$$V^l(t) = \frac{d^l V}{dt^l}(t), \qquad l = 0,\dots,k-1.$$
We next write the equations governing the time variation of the k N-dimensional sub-vectors of $\tilde{V}(t)$, i.e., the derivatives of order 0,…,k − 1 of V(t). These are vector versions of Eq. 12. We write

$$d\tilde{V}^l(t) = \tilde{V}^{l+1}(t)\,dt + F^l(t)\,dW^l(t), \qquad l = 0,\dots,k-2, \tag{14}$$

where Fl(t) is the N × N diagonal matrix

$$F^l(t) = \mathrm{diag}\big(\underbrace{f_{l1}(t),\dots,f_{l1}(t)}_{N_1 \text{ times}},\dots,\underbrace{f_{lP}(t),\dots,f_{lP}(t)}_{N_P \text{ times}}\big),$$

where flα(t), α = 1,…,P, is repeated Nα times, and the Wl(t), l = 0,…,k − 2, are k − 1 N-dimensional independent standard Brownian processes.
The equation governing the (k − 1)th differential of the membrane potential has a linear part determined by the differential operators $\mathcal{D}^k_\alpha$, α = 1,…,P, and accounts for the external inputs (deterministic and stochastic) and the activity of the neighbors. We note $\mathcal{B}(t)$ the N × Nk matrix describing the relation between the neurons membrane potentials and their derivatives up to the order k − 1, and the (k − 1)th derivative of V. This matrix is defined as the concatenation of the k N × N diagonal matrixes

$$B^l(t) = \mathrm{diag}\big(\underbrace{b_{l1}(t),\dots,b_{l1}(t)}_{N_1 \text{ times}},\dots,\underbrace{b_{lP}(t),\dots,b_{lP}(t)}_{N_P \text{ times}}\big)$$

for l = 0,…,k − 1:

$$\mathcal{B}(t) = \big[\,B^0(t)\ \ B^1(t)\ \cdots\ B^{k-1}(t)\,\big].$$

We have:

$$d\tilde{V}^{k-1}(t) = \Big(-\mathcal{B}(t)\tilde{V}(t) + \bar{\mathbf{J}}\,\mathbf{S}\big(\tilde{V}^0(t)\big) + \mathbf{I}(t)\Big)dt + F^{k-1}(t)\,dW^{k-1}(t), \tag{15}$$

where Wk−1(t) is an N-dimensional standard Brownian process independent of Wl(t), l = 0,…,k − 2. The coordinates of the N-dimensional vector I(t) are the external deterministic input currents,

$$I_i(t) = I_{p(i)}(t), \qquad i = 1,\dots,N,$$

$\bar{\mathbf{J}}$ is the N × N matrix of the weights $\bar{J}_{ij}$, which are equal to $\bar{J}_{p(i)p(j)}/N_{p(j)}$ (see Eq. 4), and $\mathbf{S}$ is a mapping from $\mathbb{R}^N$ to $\mathbb{R}^N$ such that

$$\mathbf{S}(x) = \big[S_{p(1)}(x_1),\dots,S_{p(N)}(x_N)\big]^T.$$
We define

$$\mathcal{L}(t) = \begin{bmatrix} 0_{N\times N} & \mathrm{Id}_N & 0_{N\times N} & \cdots & 0_{N\times N} \\ \vdots & & \ddots & & \vdots \\ 0_{N\times N} & \cdots & & & \mathrm{Id}_N \\ -B^0(t) & -B^1(t) & \cdots & & -B^{k-1}(t) \end{bmatrix}, \tag{16}$$

where IdN is the N × N identity matrix and 0N × N the N × N null matrix. We also define the two kN-dimensional vectors

$$\tilde{\mathbf{I}}(t) = \begin{bmatrix} 0_N \\ \vdots \\ 0_N \\ \mathbf{I}(t) \end{bmatrix} \quad\text{and}\quad \tilde{\mathbf{S}}\big(\tilde{V}(t)\big) = \begin{bmatrix} 0_N \\ \vdots \\ 0_N \\ \bar{\mathbf{J}}\,\mathbf{S}\big(\tilde{V}^0(t)\big) \end{bmatrix},$$

where 0N is the N-dimensional null vector.
Combining Eqs. 14 and 15, the full equation satisfied by $\tilde{V}(t)$ can be written:

$$d\tilde{V}(t) = \Big(\mathcal{L}(t)\tilde{V}(t) + \tilde{\mathbf{S}}\big(\tilde{V}(t)\big) + \tilde{\mathbf{I}}(t)\Big)dt + \mathbf{F}(t)\,dW_t, \tag{17}$$

where the kN × kN matrix F(t) is equal to diag(F0(t),…,Fk−1(t)) and Wt is a kN-dimensional standard Brownian process.

The Mean-Field Equations

One of the central goals of this paper is to analyze what happens when we let the total number N of neurons grow to infinity. Can we “summarize” the kN equations (Eq. 17) with a smaller number of equations that would account for the populations activity? We show that the answer to this question is yes and that the populations activity can indeed be represented by P stochastic differential equations of order k. Despite the fact that their solutions are Gaussian processes, these equations turn out to be quite complicated because these processes are non-Markovian.
We assume that the proportions of neurons in each population are nontrivial, i.e.:

$$\lim_{N\to\infty} \frac{N_\alpha}{N} = c_\alpha \in (0,1), \qquad \alpha = 1,\dots,P.$$
If it were not the case the corresponding population would not affect the global behavior of the system, would not contribute to the mean-field equation, and could be neglected.

General derivation of the mean-field equation

When investigating the structure of such mesoscopic neural assemblies as cortical columns, experimentalists are able to provide the average value $\bar{J}_{\alpha\beta}$ of the synaptic efficacy Jij of neural population β to population α. These values are obviously subject to some uncertainty, which can be modeled by Gaussian random variables. We also impose that the distribution of the Jijs depends only on the population pair α = p(i), β = p(j), and on the total number of neurons Nβ of population β:

$$J_{ij} \sim \mathcal{N}\!\left(\frac{\bar{J}_{\alpha\beta}}{N_\beta},\ \frac{\sigma_{\alpha\beta}^2}{N_\beta}\right). \tag{18}$$
We also make the additional assumption that the Jij's are independent. This is a reasonable assumption as far as modeling cortical columns from experimental data is concerned. Indeed, it is already difficult for experimentalists to provide the average value of the synaptic strength $\bar{J}_{\alpha\beta}$ from population β to population α and to estimate the corresponding error bars (σαβ), but measuring synaptic efficacy correlations in a large assembly of neurons seems currently out of reach. It is known, though, that synaptic weights are indeed correlated (e.g., via synaptic plasticity mechanisms); these correlations are built by the dynamics via a complex interwoven evolution of neurons and synapses, and postulating the form of synaptic weight correlations would require, on theoretical grounds, a detailed investigation of the whole history of the neurons–synapses dynamics.
Let us now discuss the scaling form of the probability distribution (Eq. 18) of the Jij's, namely the division by Nβ of the mean and variance of the Gaussian distribution. This scaling ensures that the "local interaction field" $\eta_{i\beta}(t) = \sum_{j:\,p(j)=\beta} J_{ij}\, S_\beta(V_j(t))$, summarizing the effects of the neurons in population β on neuron i, has a mean and variance which do not depend on Nβ and are only controlled by the phenomenological parameters $\bar{J}_{\alpha\beta}$, σαβ.
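This scaling is easy to verify empirically: sampling the weights as in Eq. 18 and averaging over realizations of J only, the mean of the local field stays near J̄·⟨Sβ(V)⟩ and its variance near σ²·⟨Sβ(V)²⟩ whatever Nβ is (the sigmoid, the parameter values and the presynaptic state below are illustrative assumptions):

```python
import numpy as np

# Empirical check that the mean and variance of the local interaction
# field eta_i = sum_j J_ij S(V_j) do not depend on the population size N.
rng = np.random.default_rng(42)

J_bar, sigma = 2.0, 1.5        # illustrative phenomenological parameters
S = np.tanh                    # illustrative sigmoid

stats = {}
for N in (100, 1000, 5000):
    s = S(rng.normal(0.0, 1.0, N))            # fixed presynaptic state
    # J_ij ~ N(J_bar/N, sigma^2/N), independent; 1000 realizations of J
    J = rng.normal(J_bar / N, sigma / np.sqrt(N), size=(1000, N))
    eta = J @ s
    stats[N] = (eta.mean(), eta.var(), s.mean(), (s ** 2).mean())

for N, (m, v, s_mean, s2_mean) in stats.items():
    # mean ~ J_bar * <S>, variance ~ sigma^2 * <S^2>, independently of N
    print(N, m, J_bar * s_mean, v, sigma ** 2 * s2_mean)
```

Conditioned on the presynaptic state, the field's mean is exactly J̄·⟨S⟩ and its variance exactly σ²·⟨S²⟩, so the printed pairs agree up to Monte Carlo error for every N.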
We are interested in the limit law, when N → ∞, of the N-dimensional vector V defined in Eq. 3 under the joint law of the connectivities and the Brownian motions, which we call the mean-field limit. This law can be described by a set of P equations, the mean-field equations. As mentioned in the Introduction, these equations can be derived in several ways, either heuristically as in the work of Amari (Amari, 1972; Amari et al., 1977), Sompolinsky (Crisanti et al., 1990; Sompolinsky et al., 1988), and Cessac (Cessac, 1995; Samuelides and Cessac, 2007), or rigorously as in the work of Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). We derive them here in a pedestrian way, prove that they are well-posed, and provide an algorithm for computing their solution.
The effective description of the network population by population is possible because the neurons in each population are interchangeable, i.e., have the same probability distribution under the joint law of the multidimensional Brownian motion and the connectivity weights. This is the case because of the relations (Eqs. 4 and 16) which imply the form of Eq. 17.
The main ideas of dynamic mean-field equations. Before diving into the mathematical developments, let us briefly comment on the basic ideas and conclusions of the mean-field approach. Following Eq. 8, the evolution of the membrane potential of some neuron i in population α is given by:

$$\mathcal{D}^k_{\alpha} V_i(t) = \sum_{j} J_{ij}\, S_{p(j)}(V_j(t)) + I_i(t) + n_i(t). \tag{19}$$

Using the assumption that Si, Ii, ni depend only on the neuron's population, this gives:

$$\mathcal{D}^k_{\alpha} V_i(t) = \sum_{\beta=1}^{P} \eta_{i\beta}(V(t)) + I_\alpha(t) + n_i(t),$$

where we have introduced the local interaction field $\eta_{i\beta}(V(t)) = \sum_{j:\,p(j)=\beta} J_{ij}\, S_\beta(V_j(t))$, summarizing the effects of neurons in population β on neuron i and whose probability distribution only depends on the pre- and postsynaptic populations α and β.
In the simplest situation, where the Jij's have no fluctuations (σαβ = 0), this field reads $\eta_{i\beta}(V(t)) = \frac{\bar{J}_{\alpha\beta}}{N_\beta} \sum_{j:\,p(j)=\beta} S_\beta(V_j(t)) = \bar{J}_{\alpha\beta}\,\Phi_\beta(V(t))$. The term $\Phi_\beta(V(t)) = \frac{1}{N_\beta}\sum_{j:\,p(j)=\beta} S_\beta(V_j(t))$ is the frequency rate of neurons in population β, averaged over this population. Introducing in the same way the average membrane potential in population β, $\bar{V}_\beta(t) = \frac{1}{N_\beta}\sum_{j:\,p(j)=\beta} V_j(t)$, one obtains:

$$\mathcal{D}^k_{\alpha} \bar{V}_\alpha(t) = \sum_{\beta=1}^{P} \bar{J}_{\alpha\beta}\,\Phi_\beta(V(t)) + I_\alpha(t) + \bar{n}_\alpha(t).$$
This equation resembles Eq. 19 very much if one makes the following reasoning: "Since Φβ(V(t)) is the frequency rate of neurons in population β, averaged over this population, and since, for one neuron, the frequency rate is νi(t) = Si(Vi(t)), let us write Φβ(V(t)) = Sβ(V̄β(t))". This leads to:

$$\mathcal{D}^k_{\alpha} \bar{V}_\alpha(t) = \sum_{\beta=1}^{P} \bar{J}_{\alpha\beta}\, S_\beta\big(\bar{V}_\beta(t)\big) + I_\alpha(t) + \bar{n}_\alpha(t), \tag{22}$$
which has exactly the same form as Eq. 19 but at the level of a neuron population. Equations such as (22), which are obtained via a very strong assumption:

$$\Phi_\beta(V(t)) = S_\beta\big(\bar{V}_\beta(t)\big), \tag{23}$$
are typically those obtained by Jansen and Rit (1995) . Surprisingly, they are correct and can be rigorously derived, as discussed below, provided σαβ = 0.
However, they cannot remain true as soon as the synaptic weights fluctuate. Indeed, the transition from Eq. 19 to Eq. 22 corresponds to a projection from an N-dimensional space to a P-dimensional one, which holds because the N × N synaptic weight matrix has in fact only P linearly independent rows. This does not hold anymore if the Jij's are random, since the synaptic weight matrix then generically has full rank. Moreover, the effects of the nonlinear dynamics on the synaptic weight variations about their mean are not small even if the σαβs are, and the real trajectories of Eq. 19 can depart strongly from the trajectories of Eq. 22. This is the main message of this paper.
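The rank argument can be illustrated directly (population sizes and weight values below are arbitrary choices): a block-constant weight matrix has rank P, while a randomly perturbed one generically has full rank.

```python
import numpy as np

# sigma = 0: the rows of the weight matrix depend only on the population
# of the postsynaptic neuron, so there are only P independent rows.
rng = np.random.default_rng(3)

P, n = 2, 50                              # P populations of n neurons each
J_bar = np.array([[1.0, -2.0],
                  [0.5,  0.3]])           # arbitrary mean weights
J0 = np.block([[np.full((n, n), J_bar[a, b] / n) for b in range(P)]
               for a in range(P)])

assert np.linalg.matrix_rank(J0) == P     # projection onto P dimensions

# sigma > 0: independent Gaussian fluctuations restore full rank
J = J0 + 0.1 / np.sqrt(n) * rng.standard_normal(J0.shape)
assert np.linalg.matrix_rank(J) == P * n  # no low-dimensional reduction
```

This is exactly why the naive population equations cannot capture the dynamics once the weights fluctuate: the random part of J destroys the low-rank structure that justified the projection.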
To finish this qualitative description, let us say in a few words what happens to the mean-field equations when σαβ ≠ 0. We show below that the local interaction fields ηiβ(V(t)) become, in the limit Nβ → ∞, time-dependent Gaussian fields Uαβ(t). One of the main results is that these fields are non-Markovian, i.e., they integrate the whole history via the synaptic responses g, which are convolution products. Although the evolution equation for the membrane potential averaged over a population takes a very simple form,

$$\mathcal{D}^k_{\alpha} \bar{V}_\alpha(t) = \sum_{\beta=1}^{P} U_{\alpha\beta}(t) + I_\alpha(t) + \bar{n}_\alpha(t),$$

it hides a real difficulty, since Uαβ(t) depends on the whole past. Therefore, the introduction of synaptic weight variability leads to a drastic change in neural mass models, as we now develop.
The Mean-Field equations. We note $\mathcal{C}([t_0,T];\mathbb{R}^P)$ (respectively $\mathcal{C}((-\infty,T];\mathbb{R}^P)$) the set of continuous functions from the real interval [t0, T] (respectively (−∞, T]) to $\mathbb{R}^P$. By assigning a probability to subsets of such functions, a continuous stochastic process X defines a positive measure of unit mass on $\mathcal{C}([t_0,T];\mathbb{R}^P)$ (respectively $\mathcal{C}((-\infty,T];\mathbb{R}^P)$). This set of positive measures of unit mass is noted $\mathcal{M}_1^+(\mathcal{C}([t_0,T];\mathbb{R}^P))$ (respectively $\mathcal{M}_1^+(\mathcal{C}((-\infty,T];\mathbb{R}^P))$).
We now define a process of particular importance for describing the limit process: the effective interaction process.
Definition 2. (Effective interaction process). Let X ∈ $\mathcal{M}_1^+(\mathcal{C}([t_0,T];\mathbb{R}^P))$ (respectively $\mathcal{M}_1^+(\mathcal{C}((-\infty,T];\mathbb{R}^P))$) be a given Gaussian stochastic process. The effective interaction term is the Gaussian process UX ∈ $\mathcal{M}_1^+(\mathcal{C}([t_0,T];\mathbb{R}^{P\times P}))$ (respectively $\mathcal{M}_1^+(\mathcal{C}((-\infty,T];\mathbb{R}^{P\times P}))$) defined by:

$$\begin{cases} \mathbb{E}\big[U^X_{\alpha\beta}(t)\big] = \bar{J}_{\alpha\beta}\, m^X_\beta(t),\\[4pt] \mathrm{Cov}\big(U^X_{\alpha\beta}(t),\, U^X_{\alpha\beta}(s)\big) = \sigma_{\alpha\beta}^2\, \Delta^X_\beta(t,s),\\[4pt] \mathrm{Cov}\big(U^X_{\alpha\beta}(t),\, U^X_{\alpha'\beta'}(s)\big) = 0 \quad \text{if } (\alpha,\beta)\neq(\alpha',\beta'), \end{cases}$$

where

$$m^X_\beta(t) = \mathbb{E}\big[S_\beta(X_\beta(t))\big]$$

and

$$\Delta^X_\beta(t,s) = \mathbb{E}\big[S_\beta(X_\beta(t))\, S_\beta(X_\beta(s))\big].$$
In order to construct the solution of the mean-field equations (see Section "Existence and Uniqueness of Solutions in Finite Time") we will need more explicit expressions for $m^X_\beta(t)$ and $\Delta^X_\beta(t,s)$, which we obtain in the next proposition.
Proposition 1. Let $\mu_X(t) = (\mu_\beta(t))_{\beta=1,\dots,P}$ be the mean of the process X and $C_X(t,s) = (C_{\beta\beta'}(t,s))$ be its covariance matrix. The vectors mX(t) and ΔX(t, s) that appear in the definition of the effective interaction process UX are given by the following expressions:

$$m^X_\beta(t) = \int_{\mathbb{R}} S_\beta\Big(\mu_\beta(t) + \sqrt{C_{\beta\beta}(t,t)}\; x\Big)\,\gamma(x)\,dx \tag{26}$$

and

$$\Delta^X_\beta(t,s) = \int_{\mathbb{R}^2} S_\beta\Big(\mu_\beta(t) + \sqrt{C_{\beta\beta}(t,t)}\; x\Big)\, S_\beta\Big(\mu_\beta(s) + \frac{C_{\beta\beta}(t,s)}{\sqrt{C_{\beta\beta}(t,t)}}\, x + \sqrt{C_{\beta\beta}(s,s) - \frac{C_{\beta\beta}(t,s)^2}{C_{\beta\beta}(t,t)}}\; y\Big)\,\gamma(x)\,\gamma(y)\,dx\,dy, \tag{27}$$

where

$$\gamma(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$$

is the probability density of a 0-mean, unit variance, Gaussian variable.
Proof. The results follow immediately, by a change of variables, from the fact that Xβ(t) is a univariate Gaussian random variable of mean μβ(t) and variance Cββ(t, t), and that the pair (Xβ(t), Xβ(s)) is a bivariate Gaussian random variable with mean (μβ(t), μβ(s)) and covariance matrix

$$\begin{pmatrix} C_{\beta\beta}(t,t) & C_{\beta\beta}(t,s) \\ C_{\beta\beta}(t,s) & C_{\beta\beta}(s,s) \end{pmatrix}.$$
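Proposition 1 reduces m and Δ to one- and two-dimensional Gaussian integrals, which is exactly what a numerical implementation evaluates, e.g. by Gauss–Hermite quadrature. A minimal sketch, where the sigmoid S = tanh is an illustrative assumption:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Gauss-Hermite evaluation of m(t) = E[S(X_t)] and
# Delta(t, s) = E[S(X_t) S(X_s)] for Gaussian X.
z, w = hermegauss(60)              # nodes/weights for weight exp(-z^2/2)
w = w / np.sqrt(2.0 * np.pi)       # normalize against the Gaussian density

def m(mu, C, S=np.tanh):
    """E[S(mu + sqrt(C) z)], z ~ N(0, 1)."""
    return float(np.sum(w * S(mu + np.sqrt(C) * z)))

def Delta(mu_t, mu_s, C_tt, C_ss, C_ts, S=np.tanh):
    """E[S(X_t) S(X_s)] for a bivariate Gaussian pair; requires C_tt > 0."""
    a = C_ts / np.sqrt(C_tt)                        # conditional slope
    r = np.sqrt(max(C_ss - C_ts ** 2 / C_tt, 0.0))  # conditional std
    inner = np.array([np.sum(w * S(mu_s + a * zi + r * z)) for zi in z])
    return float(np.sum(w * S(mu_t + np.sqrt(C_tt) * z) * inner))
```

For instance, `m(0.0, 1.0)` vanishes by symmetry of tanh, `m(mu, 0.0)` reduces to `S(mu)`, and `Delta` with `C_ts = 0` factorizes into `m(mu_t, C_tt) * m(mu_s, C_ss)`, matching the independent case.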
Choose P neurons i1,…,iP, one in each population (neuron iα belongs to the population α). We define the kP-dimensional vector $\tilde{V}_P(t)$ by choosing, in each of the k N-dimensional components $\tilde{V}^l(t)$ of the vector $\tilde{V}(t)$ defined in Eq. 13, the coordinates of indexes i1,…,iP. Then it can be shown, using either a heuristic argument or large deviations techniques (see Appendix A), that the sequence of kP-dimensional processes $\tilde{V}_P(t)$ converges in law, as N → ∞, to the process $\tilde{\mathcal{V}}(t)$ solution of the following mean-field equation:

$$d\tilde{\mathcal{V}}(t) = \Big(\mathbf{L}(t)\tilde{\mathcal{V}}(t) + \tilde{\mathbf{U}}(t) + \tilde{\mathbf{I}}(t)\Big)dt + \mathbf{F}(t)\,d\tilde{W}_t. \tag{28}$$
L(t) is the Pk × Pk matrix

$$\mathbf{L}(t) = \begin{bmatrix} 0_{P\times P} & \mathrm{Id}_P & 0_{P\times P} & \cdots & 0_{P\times P} \\ \vdots & & \ddots & & \vdots \\ 0_{P\times P} & \cdots & & & \mathrm{Id}_P \\ -B^0(t) & -B^1(t) & \cdots & & -B^{k-1}(t) \end{bmatrix}.$$
The P × P matrixes Bl(t), l = 0,…,k − 1, are, with a slight abuse of notations, equal to diag(bl1(t),…,blP(t)). $\tilde{W}_t$ is a kP-dimensional standard Brownian process. The interaction term $\tilde{\mathbf{U}}(t) = \big[0_P,\dots,0_P,\ U(t)\mathbf{1}\big]^T$, where U(t) has the law of the effective interaction process associated to the vector $\tilde{\mathcal{V}}^0(t)$ (first P-dimensional component of $\tilde{\mathcal{V}}(t)$) and is statistically independent of the external noise $\tilde{W}_t$ and of the initial condition $\tilde{\mathcal{V}}(t_0)$ (when t0 > −∞):

$$U(t) \sim U^{\tilde{\mathcal{V}}^0}(t).$$

We have used for the matrixes Fl(t), l = 0,…,k − 1, the same abuse of notations as for the matrixes Bl(t), i.e., Fl(t) = diag(fl1(t),…,flP(t)) for l = 0,…,k − 1. I(t) is the P-dimensional external current [I1(t),…,IP(t)]T.
The process U(t) is P × P-dimensional and is applied, as a matrix, to the P-dimensional vector 1 with all coordinates equal to 1, resulting in the P-dimensional vector U(t)1 whose mean and covariance function can be readily obtained from Definition 2:

$$\mathbb{E}\big[(U(t)\mathbf{1})_\alpha\big] = \sum_{\beta=1}^{P} \bar{J}_{\alpha\beta}\, m^{\tilde{\mathcal{V}}^0}_\beta(t) \tag{29}$$

and

$$\mathrm{Cov}\big((U(t)\mathbf{1})_\alpha,\,(U(s)\mathbf{1})_\alpha\big) = \sum_{\beta=1}^{P} \sigma_{\alpha\beta}^2\, \Delta^{\tilde{\mathcal{V}}^0}_\beta(t,s). \tag{30}$$

We have of course

$$\mathrm{Cov}\big((U(t)\mathbf{1})_\alpha,\,(U(s)\mathbf{1})_{\alpha'}\big) = 0, \qquad \alpha \neq \alpha'.$$
Equation 28 is formally very similar to Eq. 17, but there are some very important differences. The first is of dimension kP whereas the second is of dimension kN, which grows arbitrarily large when N → ∞. The interaction term of the second, $\bar{\mathbf{J}}\,\mathbf{S}(\tilde{V}^0(t))$, is simply the synaptic weight matrix applied to the activities of the N neurons at time t. The interaction term of the first equation, $\tilde{\mathbf{U}}(t)$, though innocuous looking, is in fact quite complex (see Eqs. 29 and 30). In fact the stochastic process $\tilde{\mathcal{V}}(t)$, putative solution of Eq. 28, is in general non-Markovian.
To proceed further we formally integrate the equation using the flow, or resolvent, of Eq. 28, noted ΦL(t, t0) (see Appendix B), and we obtain, since we assumed L continuous, an implicit representation of $\tilde{\mathcal{V}}(t)$:

$$\tilde{\mathcal{V}}(t) = \Phi_L(t,t_0)\,\tilde{\mathcal{V}}(t_0) + \int_{t_0}^{t} \Phi_L(t,s)\Big(\tilde{\mathbf{U}}(s) + \tilde{\mathbf{I}}(s)\Big)ds + \int_{t_0}^{t} \Phi_L(t,s)\,\mathbf{F}(s)\,d\tilde{W}_s. \tag{31}$$
We now introduce for future reference a simpler model which is quite frequently used in the description of neural networks and has been formally analyzed by Sompolinsky and colleagues (Crisanti et al., 1990 ; Sompolinsky et al., 1998 ) in the case of one population (P = 1).

Example I: The Simple Model

In the Simple Model, each neuron's membrane potential decays exponentially to its rest value if it receives no input, with a time constant τα depending only on the population. In other words, we assume that the g-shape describing the PSPs is given by Eq. 10, with K = 1 for simplicity. The noise is modeled by an independent Brownian process per neuron whose standard deviation is the same for all neurons belonging to a given population.
Hence the dynamics of a given neuron i from population α of the network reads:
yes
This is a special case of Eq. 12 where k = 1 and b(t) = 1/τα for α = 1,…,P. The corresponding mean-field equation reads:
yes
where the processes (Wα(t))t ≥ t0 are independent standard Brownian motions, and yes is the effective interaction term, see Definition 2. This is a special case of Eq. 28 with L = diag(yes) and F = diag(f1,…,fP).
Taking the expected value of both sides of Eq. 33 and using Eq. 26, we obtain that the mean μα(t) of yesα(t) satisfies the differential equation
yes
If Cββ(t, t) vanishes for all t ≥ t0 this equation reduces to:
yes
which is precisely the “naive” mean-field equation (Eq. 22) obtained under the assumption (Eq. 23). We see that Eqs. 22 are indeed correct, provided that Cββ(t, t) = 0, ∀t ≥ t0.
Equation 33 can be formally integrated implicitly and we obtain the following integral representation of the process yesα(t):
yes
where t0 is the initial time. It is an implicit equation on the probability distribution of yes(t), a special case of (Eq. 31), with yes
The variance Cαα(t, t) of yesα(t) can easily be obtained from Eq. 34. It reads
yes
where Δβ(u, v) is given by Eq. 27.
If σαβ = 0 and if sα = 0 then Cαα(t, t) = 0, ∀t ≥ t0 is a solution of this equation. Thus, the mean-field equations for the Simple Model reduce to the naive mean-field Eq. 22 in this case. This conclusion extends to all models of synaptic responses governed by Eq. 5.
However, the equation for Cαα(t, t) shows that, in the general case, solving the differential equation for μα(t) requires knowing the whole past of the process yes. This exemplifies our earlier statement on the non-Markovian nature of the solutions of the mean-field equations.
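In the special case discussed above, where the naive mean-field reduction is valid (Cββ(t, t) = 0, ∀t ≥ t0), Eq. 22 is an ordinary differential equation and can be integrated by any standard scheme. The following sketch integrates it with an explicit Euler scheme for two hypothetical populations; all numerical values (τα, mean weights J̄αβ, currents Iα) and the logistic form of the sigmoid are illustrative assumptions, not values taken from the text.

```python
import math

# Illustrative parameters for a two-population Simple Model
# (hypothetical values, chosen only for the sketch).
P = 2
tau = [0.01, 0.02]            # membrane time constants tau_alpha (s)
Jbar = [[1.5, -2.0],          # mean synaptic weights Jbar_{alpha,beta}
        [1.2, -0.5]]
I_ext = [0.5, 0.3]            # constant external currents I_alpha

def S(v, nu_max=5.0, r=1.0, v0=0.0):
    """Generic logistic firing-rate function (illustrative parameters)."""
    return nu_max / (1.0 + math.exp(r * (v0 - v)))

def naive_mean_field(mu0, dt=1e-4, T=0.5):
    """Explicit Euler integration of the 'naive' mean-field ODE (Eq. 22),
    valid when C_{beta beta}(t, t) vanishes:
        dmu_a/dt = -mu_a/tau_a + sum_b Jbar_ab S(mu_b) + I_a
    """
    mu = list(mu0)
    for _ in range(int(T / dt)):
        rates = [S(m) for m in mu]          # synchronous update
        mu = [m + dt * (-m / tau[a]
                        + sum(Jbar[a][b] * rates[b] for b in range(P))
                        + I_ext[a])
              for a, m in enumerate(mu)]
    return mu

mu_final = naive_mean_field([0.0, 0.0])
```

Because the sigmoid is bounded, the trajectory stays confined to a region set by the leak terms, so the scheme is well-behaved for time steps small compared to τα.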

Example II: The model of Jansen and Rit

One of the motivations of this study is to characterize the global behavior of an assembly of neurons in particular to get a better understanding of recordings of cortical signals like EEG or MEG. One of the classical models of neural masses is Jansen and Rit’s mass model (Jansen and Rit, 1995 ), in short the JR model (see Figure 2 ).
Figure 2. (A) Neural mass model: a population of pyramidal cells interacts with itself in an excitatory mode and with an inhibitory population of inter-neurons. (B) Block representation of the model. The g boxes account for the synaptic integration between neuronal populations. S boxes simulate cell bodies of neurons by transforming the membrane potential of a population into an output firing rate. The coefficients Jαβ are the random synaptic efficiency of population β on population α (1 represents the pyramidal population, 2 the excitatory feedback, and 3 the inhibitory inter-neurons).
The model features a population of pyramidal neurons that receives inhibitory input from local inter-neurons, excitatory feedback, and excitatory input from neighboring cortical units and sub-cortical structures such as the thalamus. The excitatory input is represented by an external firing rate that has a deterministic part I1(t) accounting for specific activity of other cortical units and a stochastic part n1(t) accounting for nonspecific background activity. We formally consider that the excitatory feedback of the pyramidal neurons is a new neural population, making the number P of populations equal to 3. We also represent the external inputs to the other two populations by the sum of a deterministic part Ij(t) and a stochastic part nj(t), j = 2, 3, see Figure 2 .
In the model introduced originally by Jansen and Rit, the connectivity weights were assumed to be constant, i.e., equal to their mean value. Nevertheless, these coefficients are variable and, as we show in the sequel, their variability impacts the solution at the level of the neural mass. Statistical properties of the connectivities have been studied in detail, for instance in (Braitenberg and Schüz, 1998 ).
We consider a network of N neurons, with Nα neurons, α = 1, 2, 3, belonging to population α. We index by 1 (respectively 2, and 3) the pyramidal (respectively excitatory feedback, inhibitory interneuron) population. We choose in each population a particular neuron indexed by iα, α = 1, 2, 3. The evolution equations of the network can be written, for instance, in terms of the potentials Vi1, Vi2 and Vi3 labeled in Figure 2 , and read:
yes
In the mean-field limit, denoting by yesα, α = 1, 2, 3 the average membrane potential of each class, we obtain the following equations:
yes
where yes is the effective interaction process associated with this problem, i.e., a Gaussian process of mean:
yes
All other mean values correspond to the non-interacting populations and are equal to 0. The covariance matrix can be deduced from Eq. 25:
yes
where
yes
This model is a voltage-based model in the sense of Ermentrout (1998) . Let us now instantiate the synaptic dynamics and compare the mean-field equations with Jansen’s population equations 2 (sometimes also improperly called mean-field equations).
The simplest model of synaptic integration is a first-order integration, which yields exponentially decaying postsynaptic potentials:
yes
Note that this is exactly Eq. 10. The corresponding g-shape satisfies the following first-order differential equation
yes
In this equation τ is the time constant of the synaptic integration and K the synaptic efficiency. The coefficients K and τ are the same for the pyramidal and the excitatory feedback populations (characteristic of the pyramidal neurons and defining the g-shape g1), and different for the inhibitory population (defining the g-shape g3). In the pyramidal or excitatory (respectively the inhibitory) case we have K = K1, τ = τ1 (respectively K = K3, τ = τ3). Finally, the sigmoid function S is given by
yes
where νmax is the maximum firing rate, and v0 is a voltage reference.
With this synaptic dynamics we obtain the first-order Jansen and Rit’s equation:
yes
The “original” Jansen and Rit’s equations (Grimbert and Faugeras, 2006 ; Jansen and Rit, 1995 ) amount to considering only the mean of the process yes and to assuming that yes for i, j ∈ {1, 2, 3}, i.e., that the expectation commutes with the sigmoidal function S. This is a very strong assumption, since the fluctuations of the solutions of the mean-field equations around the mean imply that the sigmoid cannot be considered linear in the general case.
A higher order model was introduced by van Rotterdam et al. (1982) to better account for the synaptic integration and to better reproduce the characteristics of real postsynaptic potentials. In this model the g-shapes satisfy a second-order differential equation:
yes
We recognize the g-shape defined by Eq. 11 solution of the second-order differential equation yes With this type of synaptic integration, we obtain the following mean-field equations:
yes
Here again, going from the mean-field Eq. 37 to the original Jansen and Rit’s neural mass model consists in studying the equation of the mean of the process given by Eq. 37 and commuting the sigmoidal function with the expectation.
Note that the introduction of higher-order synaptic integration results in richer behaviors. For instance, Grimbert and Faugeras (2006) showed that bifurcations can appear in the second-order JR model, giving rise to epileptic-like oscillations and alpha activity, that do not appear in the first-order model.
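The second-order synaptic integration of van Rotterdam et al. can be simulated directly by rewriting the second-order ODE as a two-dimensional first-order system. The sketch below assumes the g-shape g(t) = K a t e^(−a t) (the impulse response of the kernel of Eq. 11, satisfying y'' + 2a y' + a²y = K a x); the numerical values of K and a are illustrative, not taken from the text.

```python
import math

def psp_filter_impulse(K, a, dt, n_steps):
    """Explicit Euler simulation of the second-order synaptic ODE
        y'' + 2*a*y' + a**2 * y = K * a * x(t)
    for an impulse input at t = 0 (equivalent to y(0) = 0, y'(0) = K*a).
    The exact impulse response is the g-shape g(t) = K*a*t*exp(-a*t)."""
    y, z = 0.0, K * a          # z = y'
    out = [y]
    for _ in range(n_steps):
        y, z = y + dt * z, z + dt * (-2.0 * a * z - a * a * y)
        out.append(y)
    return out

# Illustrative synaptic efficiency K (mV) and rate constant a (1/s).
K, a = 3.25, 100.0
dt = 1e-6
traj = psp_filter_impulse(K, a, dt, 10_000)
# The g-shape peaks at t = 1/a with value g(1/a) = K/e.
g_peak_exact = K / math.e
```

Driving the same filter with a firing rate x(t) instead of an impulse gives the postsynaptic potential of the corresponding population.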

Existence and Uniqueness of Solutions in Finite Time

The mean-field equation (Eq. 31) is an implicit equation of the stochastic process (V(t))t ≥ t0. We prove in this section that under some mild assumptions this implicit equation has a unique solution. These assumptions are the following.
Assumption 1.
(a) The matrix L(t) is C0 and satisfies ‖L(t)‖ ≤ kL for all t in [t0, T], for some matrix norm ‖ ‖ and some strictly positive constant kL.
(b) The matrix F(t) has all its singular values lowerbounded (respectively upperbounded) by the strictly positive constant 3 yes (respectively yes) for all t in [t0, T].
(c) The deterministic external input vector I(t) is bounded and we have ‖I(t)‖ ≤ Imax for all t in [t0, T] and some strictly positive constant Imax.
This solution is the fixed point in the set yes of kP-dimensional processes of an equation that we will define from the mean-field equations. We will construct a sequence of Gaussian processes and prove that it converges in distribution toward this fixed point.
We first recall some results on the convergence of random variables and stochastic processes.

Convergence of Gaussian Processes

We recall the following result from Bogachev (1998) which formalizes the intuition that a sequence of Gaussian processes converges toward a Gaussian process if and only if the means and covariance functions converge. In fact in order for this to be true, it is only necessary to add one more condition, namely that the corresponding sequence of measures (elements of yes) do not have “any mass at infinity”. This property is called uniform tightness (Billingsley, 1999 ). More precisely we have
Definition 3. (Uniform tightness). Let yes be a sequence of kP-dimensional processes defined on [t0, T] and Pn be the associated elements of yes. The sequence yes is called uniformly tight if and only if for all ε > 0 there exists a compact set K of yes such that Pn(K) > 1 − ε, n ≥ 1.
Theorem 1. Let yes be a sequence of kP-dimensional Gaussian processes defined on [t0, T] or on an unbounded interval 4 of yes. The sequence converges to a Gaussian process X if and only if the following three conditions are satisfied:
• The sequence yes is uniformly tight.
• The sequence μn(t) of the mean functions converges for the uniform norm.
• The sequence Cn of the covariance operators converges for the uniform norm.
We now, as advertised, define such a sequence of Gaussian processes.
Let us fix Z0, a kP-dimensional Gaussian random variable, independent of the Brownian and of the process (X(t))t ∈ [t0,T].
Definition 4. Let X be an element of yes and yes be the function yes such that
yes
where yes and yes are defined 5 in Section “Mean-Field Equations for Multi-Populations Neural Network Models”.
Note that, by Definition 2 the random process (yes(X))t ∈ [t0, T], k ≥ 1 is the sum of a deterministic function (defined by the external current) and three independent random processes defined by Z0, the interaction between neurons, and the external noise. These three processes being Gaussian processes, so is (yes(X))t ∈ [t0, T]. Also note that (yes(X))t0 = Z0. It should be clear that a solution yes of the mean-field equation (Eq. 31) satisfies yes(t0) = Z0 and is a fixed point of yes, i.e., yes(yes)t = yes(t).
Let X be a given stochastic process of yes such that yes (hence yes is independent of the Brownian). We define the sequence of Gaussian processes yes by:
yes
In the remainder of this section we show that the sequence of processes yes converges in distribution toward the unique fixed point Y of yes, which is also the unique solution of the mean-field equation (Eq. 31).

Existence and uniqueness of a solution for the mean-field equations

The following upper and lower bounds are used in the sequel.
Lemma 1. Consider the Gaussian process yes, where UX is defined in Sections “The Mean-Field Equations” and “Introduction” and 1 is the P-dimensional vector with all coordinates equal to 1. We have
yes
for all t0 ≤ t ≤ T. The maximum eigenvalue of its covariance matrix is upperbounded by yes, where ‖Sβ‖∞ is the supremum of the absolute value of Sβ. We also note yes
Proof. The proof is straightforward from Definition 4. yes
The proof of existence and uniqueness of solution, and of the convergence of the sequence (Eq. 38) is in two main steps. We first prove that the sequence of Gaussian processes yes, k ≥ 1 is uniformly tight by proving that it satisfies Kolmogorov’s criterion for tightness. This takes care of condition 1 in Theorem 1. We then prove that the sequences of the mean functions and covariance operators are Cauchy sequences for the uniform norms, taking care of conditions 2 and 3.

Uniform tightness

We first recall the following theorem due to Kolmogorov (Kushner, 1984 , Chapter 4.1).
Theorem 2. (Kolmogorov’s criterion for tightness). Let yes be a sequence of kP-dimensional processes defined on [t0, T]. If there exist α, β, C > 0 such that
yes
then the sequence is uniformly tight.
Using this theorem we prove that the sequence yes, k ≥ 1 satisfies Kolmogorov’s criterion for β = 4 and α ≥ 1. The reason for choosing β = 4 is that, heuristically, dW ≈ (dt)1/2. Therefore, in order to upperbound yes by |t − s| raised to a power ≥ 2 (hence strictly larger than 1), we need to raise yes to a power at least equal to 4. The proof itself is technical and uses standard inequalities (Cauchy–Schwarz’s and Jensen’s), properties of Gaussian integrals, elementary properties of the stochastic integral, and Lemma 1. It also uses the fact that the input current is bounded, i.e., that yes; this is Assumption 1 (c).
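The heuristic dW ≈ (dt)1/2 underlying the choice β = 4 can be checked numerically: the increment Wt − Ws of a standard Brownian motion is Gaussian with variance t − s, so its fourth moment equals 3(t − s)2, quadratic in |t − s| as required. A minimal Monte Carlo sketch (sample size and seed are arbitrary choices of ours):

```python
import random

def brownian_increment_moment(t_minus_s, n_samples=200_000, power=4, seed=0):
    """Monte Carlo estimate of E[(W_t - W_s)^power] for a standard
    Brownian motion, whose increment W_t - W_s is N(0, t - s)."""
    rng = random.Random(seed)
    sd = t_minus_s ** 0.5
    return sum(rng.gauss(0.0, sd) ** power
               for _ in range(n_samples)) / n_samples

# E[(W_t - W_s)^4] = 3 (t - s)^2: raising the increment to the fourth
# power is what yields the (t - s)^2 bound in Kolmogorov's criterion.
m4 = brownian_increment_moment(0.5)   # exact value: 3 * 0.5**2 = 0.75
```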
Theorem 3. The sequence of processes yes, k ≥ 1 is uniformly tight.
Proof. We do the proof for k = 1, the case k > 1 is similar. If we assume that n ≥ 1 and s < t we can rewrite the difference yes as follows, using property (i) in Proposition B.1 in Appendix B.
yes
The righthand side is the sum of seven terms and therefore (Cauchy–Schwarz inequality):
yes
Because ‖ΦL(t, t0) − ΦL(s, t0)‖ ≤ |t − s| ‖L‖, we see that all terms in the righthand side of the inequality but the second one, involving the Brownian motion, are of the order of (t − s)2. We raise again both sides to the second power, use the Cauchy–Schwarz inequality, and take the expected value:
yes
Remember that yes is a P-dimensional diagonal Gaussian process, noted Yu in the sequel, therefore:
yes
The second-order moments are upperbounded by some regular function of μ and σmax (defined in Lemma 1) and, because of the properties of Gaussian integrals, so are the fourth-order moments.
We now define B(u) = ΦL(s, u)F(u) and evaluate yes We have
yes
Because yes is by construction independent of yes if i ≠ j and yes for all i, j (property of the Itô integral), the last term is the sum of only three types of terms:
1. If j1 = k1 = j2 = k2 we define
yes
and, using Cauchy–Schwarz:
yes
2. If j1 = k1 and j2 = k2 but j1 ≠ j2 we define
yes
which is equal, because of the independence of yes and yes to
yes
3. Finally, if j1 = j2 and k1 = k2 but j1 ≠ k1 we define
yes
which is equal, because of the independence of yes and yes to
yes
because of the properties of the stochastic integral, yes yes; hence, because of the properties of Gaussian integrals,
yes
for some positive constant k. This takes care of the terms of the form T1. Next we have
yes
which takes care of the terms of the form T2. Finally we have, because of the properties of the Itô integral,
yes
which takes care of the terms of the form T3.
This shows that the term yes in Eq. 40 is of the order of (t − s)1+a where a ≥ 1. Therefore we have
yes
for all s, t in [t0, T], where C is a constant independent of t and s. According to Kolmogorov’s criterion for tightness, the sequence of processes yes is uniformly tight.
The proof for yes, k > 1 is similar. yes

The mean and covariance sequences are Cauchy sequences

Let us note μn(t) [respectively Cn(t, s)] the mean (respectively the covariance matrix) function of Xn  = yes(Xn−1), n ≥ 1. We have:
yes
where yes is given by Eq. 26. Similarly we have
yes
Note that the kP × kP covariance matrix yes has only one nonzero P × P block:
yes
According to Definition 2 we have
yes
where yes is given by Eq. 27 and Dx is defined in Proposition 1.
In order to prove our main result, that the two sequences of functions (μn) and (Cn) are uniformly convergent, we require the following four lemmas, which we state without proofs; the proofs can be found in Appendixes E–H. The first lemma gives a uniform (i.e., independent of n ≥ 2 and α = 1,…,kP) strictly positive lowerbound for yes. In what follows we use the following notation: for a symmetric positive definite matrix C, we note yes its smallest eigenvalue.
Lemma 2. The following upperbounds are valid for all n ≥ 1 and all s, t ∈ [t0, T].
yes
yes
where μ and σmax are defined in Lemma 1 and yes is defined in Assumption 1.
Lemma 3. For all t ∈ [t0, T] all α = 1,…,kP, and n ≥ 1, we have
yes
where λmin is the smallest singular value of the symmetric positive definite matrix ΦL(t, t0)ΦL(t, t0)T for t ∈ [t0, T] and yes is the smallest eigenvalue of the symmetric positive definite covariance matrix yes.
The second lemma also gives a uniform lowerbound for the expression yes, which appears in the definition of Cn+1 through Eqs. 43 and 27. The crucial point is that this function is O(|t − s|), which is central to the proof of Lemma 5.
Lemma 4. For all α = 1,…,kP and n ≥ 1 the quantity yes is lowerbounded by the positive symmetric function:
yes
where yes is the strictly positive lower bound, introduced in Assumption 1, on the singular values of the matrix F(u) for u ∈ [t0, T].
The third lemma shows that an integral that appears in the proof of the uniform convergence of the sequences of functions (μn) and (Cn) is upperbounded by the nth term of a convergent series.
Lemma 5. The 2n-dimensional integral
yes
where the functions ρi(ui, vi), i = 1,…,n are either equal to 1 or to yes (the function θ is defined in Lemma 4), is upperbounded by kn/(n − 1)! for some positive constant k.
With these lemmas in hand we prove Proposition 3. The proof is technical but its idea is very simple. We find upperbounds for the infinite matrix norm of Cn+1(t, s) − Cn(t, s) and the infinite norm of μn+1(t) − μn(t) by applying the mean value theorem and Lemmas 3 and 4 to these norms. These upperbounds involve integrals of the infinite norms of Cn(t, s) − Cn−1(t, s) and μn(t) − μn−1(t) and, through Lemma 4, of one over the square root of the function θ. Proceeding recursively and using Lemma 5, one easily shows that the infinite norms of Cn+1 − Cn and μn+1 − μn are upperbounded by the nth term of a convergent series, from which it follows that the two sequences of functions are Cauchy sequences, hence convergent.
Proposition 3. The sequences of covariance matrix functions Cn(t, s) and of mean functions μn(t), s, t in [t0, T] are Cauchy sequences for the uniform norms.
Proof. We have
yes
We take the infinite matrix norm of both sides of this equality and use the upperbounds yes and yes (see Appendix B) to obtain 6
yes
According to Eq. 27 we are led to consider the difference An − An−1, where:
yes
We write next:
yes
The mean value theorem yields:
yes
Using the fact that yes, we obtain:
yes
where the constants kL and yes are defined in Appendix B and
yes
A similar process applied to the mean values yields:
yes
where μ is defined in Lemma 1. We now use the mean value theorem and Lemmas 3 and 4 to find upperbounds for yes yes and yes. We have
yes
where k0 is defined in Lemma 3. Hence:
yes
Along the same lines we can show easily that:
yes
and that:
yes
where θ(u, v) is defined in Lemma 4. Grouping terms together and using the fact that all integrated functions are positive, we write:
yes
Note that, because of Lemma 3, all integrals are well-defined. Regarding the mean functions, we write:
yes
Proceeding recursively until we reach C0 and μ0, we obtain an upperbound for ‖Cn+1(t, s) − Cn(t, s)‖ (respectively for ‖μn+1(t) − μn(t)‖) which is the sum of fewer than 5n terms, each one being the product of k raised to a power ≤ n, times 2μmax or 2Σmax (upperbounds for the norms of the mean vector and the covariance matrix defined in Lemma 2), times a 2n-dimensional integral In given by
yes
where the functions ρi(ui, vi), i = 1,…,n are either equal to 1 or to yes. According to Lemma 5, this integral is of the order of some positive constant raised to the power n divided by (n − 1)!. Hence the sum is less than some positive constant k raised to the power n divided by (n − 1)!. By taking the supremum with respect to t and s in [t0, T] we obtain the same result for ‖Cn+1 − Cn‖ (respectively for ‖μn+1 − μn‖). Since the series yes is convergent, this implies that ‖Cn+p − Cn‖ (respectively ‖μn+p − μn‖) can be made arbitrarily small for large n and p, and the sequence Cn (respectively μn) is a Cauchy sequence. yes

Existence and uniqueness of a solution of the mean-field equations

It is now easy to prove our main result, that the mean-field equations (Eq. 31) or equivalently (Eq. 28) are well-posed, i.e., have a unique solution.
Theorem 4. For any nondegenerate kP-dimensional Gaussian random variable Z0, independent of the Brownian, and any initial process X such that X(t0) = Z0, the map yes has a unique fixed point in yes toward which the sequence yes of Gaussian processes converges in law.
Proof. Since yes (respectively yes is a Banach space for the uniform norm, the Cauchy sequence μn (respectively Cn) of Proposition 3 converges to an element μ of yes (respectively an element C of yes. Therefore, according to Theorem 1, the sequence yes of Gaussian processes converges in law toward the Gaussian process Y with mean function μ and covariance function C. This process is clearly a fixed point of yes.
Hence we know that there exists at least one fixed point of the map yes. Assume there exist two distinct fixed points Y1 and Y2 of yes with mean functions μi and covariance functions Ci, i = 1, 2, and the same initial condition. Since for all n ≥ 1 we have yes, the proof of Proposition 3 shows that yes (respectively yes) is upperbounded by the product of a positive number an (respectively bn) with ‖μ1 − μ2‖ (respectively with ‖C1 − C2‖). Since limn→∞ an = limn→∞ bn = 0 and yes, i = 1, 2 (respectively yes, i = 1, 2), this shows that μ1 = μ2 and C1 = C2, hence the two Gaussian processes Y1 and Y2 are indistinguishable. yes

Conclusion

We have proved that for any nondegenerate Gaussian initial condition Z0 there exists a unique solution of the mean-field equations. The proof of Theorem 4 is constructive and hence provides a way of computing the solution of the mean-field equations by iterating the map yes of Definition 4, starting from any initial process X satisfying X(t0) = Z0, for instance a Gaussian process such as an Ornstein–Uhlenbeck process. We build upon these facts in Section “Numerical Experiments”.
Note that existence and uniqueness hold for any initial time t0 and final time T.

Existence and Uniqueness of Stationary Solutions

So far, we have investigated the existence and uniqueness of solutions of the mean-field equation for a given initial condition. We are now interested in investigating stationary solutions, which allow for some simplifications of the formalism.
A stationary solution is a solution whose probability distribution does not change under the flow of the equation. Such solutions have already been investigated by several authors (see Brunel and Hakim, 1999 ; Sompolinsky et al., 1998 ). We propose a new framework to study and simulate these processes. Indeed, we show in this section that under a certain contraction condition there exists a unique solution to the stationary problem. As in the previous section, our proof is constructive and provides a way to simulate the solutions.
Remark. The long-time mean-field description of a network is still a great endeavor in mathematics and statistical physics. In this section we formally take the mean-field equation we obtained and let t0 → −∞, obtaining an equation which is the limit of the mean-field equation when t0 → −∞. This means that we consider first the limit N → ∞ and then t0 → −∞. These two limits do not necessarily commute, and there are known examples, for instance in spin glasses, where they do not.
It is clear that in order to get stationary solutions, the stochastic system has to be autonomous. More precisely, we modify Assumption 1 as follows
Assumption 2.
(a) The matrices L(t) and F(t) and the input current I(t) do not depend upon t.
(b) The real parts of the eigenvalues of L are negative:
yes
for all eigenvalues λ of L.
(c) The matrix F has full rank.
Under Assumption (a) of 2, the resolvent ΦL(t, s) is equal to eL(t − s). Under Assumption (b) we only consider first-order systems, since otherwise the matrix L has eigenvalues equal to 0. We now prove the following proposition.
Proposition 4. Under the previous assumptions we have:
1.
yes
2. the process yes is well-defined, Gaussian and stationary when t0 → −∞.
Proof. The first point follows from the fact that Re(λ) < −λL for all eigenvalues λ of L. This assumption also implies that there exists a norm on yesP such that
yes
and hence
yes
for some positive constant k. This implies the remaining two properties.
We now address the second point of the property. The stochastic integral yes is well-defined ∀t ≤ T and is Gaussian with 0-mean. Its covariance matrix reads:
yes
Let us assume for instance that t′ < t and perform the change of variable u = t − s to obtain
yes
Under the previous assumptions this matrix integral is defined when t0 → −∞ (dominated convergence theorem) and we have
yes
which is a well-defined function of t′ − t. yes
The second point of Proposition 4 guarantees the existence of the process
yes
as the limit of the processes yes when t0 → −∞. This process is a stationary distribution of the equation:
yes
it is Gaussian, of mean yes, and its covariance matrix Σ0 is equal to yes defined by Eq. 50, which is independent of t.
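In the scalar case (P = k = 1, with L = −λ and F = σ) the covariance of Eq. 50 reduces to the familiar stationary variance σ2/(2λ) of an Ornstein–Uhlenbeck process. A minimal numerical check of this reduction (the midpoint rule and all numerical values are our own choices):

```python
import math

def stationary_variance_numeric(lam, sigma, du=1e-4, u_max=10.0):
    """Midpoint-rule evaluation of the scalar version of Eq. 50:
        Sigma_0 = int_0^infinity exp(-2*lam*u) * sigma**2 du,
    obtained by taking L = -lam and F = sigma in the general expression.
    The tail beyond u_max is negligible for lam*u_max >> 1."""
    total, u = 0.0, 0.5 * du
    while u < u_max:
        total += math.exp(-2.0 * lam * u) * sigma * sigma * du
        u += du
    return total

lam, sigma = 2.0, 0.5                     # illustrative leak and noise level
sigma0 = stationary_variance_numeric(lam, sigma)
closed_form = sigma**2 / (2.0 * lam)      # = 0.0625
```

The same identity in matrix form states that Σ0 solves the Lyapunov equation LΣ0 + Σ0LT + FFT = 0, which is how one would compute it for P > 1.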
We call long-term mean-field equation (LTMFE) the implicit equation:
yes
where X0 is the stationary process defined by Eq. 51 and where UV(t) is the effective interaction process introduced previously.
We next define the long term function yesyes
yes
Proposition 5. The function yes is well-defined on yes.
Proof. We have already seen that the process X0 is well-defined. The term yes is also well-defined because of the assumptions on L.
Let X be a given process in yes. To prove the proposition we just have to ensure that the Gaussian process yes is well-defined. This results from the contraction assumption on L and the fact that the functions Sβ are bounded. We decompose this process into a “long memory” term yes and the interaction term from time t = 0, namely yes. This latter term is clearly well-defined. We show that the memory term is also well-defined as a Gaussian random variable.
We write this term as yes and consider the second factor. This random variable is Gaussian; its mean reads yes where
yes
The integral defining the mean is well-defined because of Eq. 49 and the fact that the functions Sβ are bounded. A similar reasoning shows that the corresponding covariance matrix is well-defined. Hence the Gaussian process yes is well-defined, and hence for any process yes, the process yes(X) is well-defined. yes
We can now prove the following proposition.
Proposition 6. The mean vectors and the covariance matrices of the processes in the image of yes are bounded.
Proof. Indeed, since yes, we have:
yes
In a similar fashion the covariance matrices of the processes in the image of yes are bounded. Indeed we have:
yes
resulting in
yes yes
Lemma 6. The set of stationary processes is invariant under yes.
Proof. Since the processes in the image of yes are Gaussian processes, one just needs to check that the mean of the process is constant in time and that its covariance matrix C(s, t) only depends on t − s.
Let Z be a stationary process and Y = yes(Z). We denote by yes the mean of the process Zα(t) and by yes its covariance function. The mean of the process yes reads:
yes
and hence does not depend on time. We note μZ the mean vector of the stationary process UZ·1.
Similarly, its covariance function reads:
yes
which is clearly a function, noted yes, of t − s. Hence UZ· 1 is stationary and we denote by yes its covariance function.
It follows that the mean of Yt reads:
yes
since we proved that yes was not a function of s.
Similarly, we compute the covariance function and check that it can be written as a function of (t − s). Indeed, it reads:
yes
since the process X0 is stationary. CY (t, s) is clearly a function of t − s. Hence Y is a stationary process, and the proposition is proved. yes
Theorem 5. The sequence of processes yes is uniformly tight.
Proof. The proof is essentially the same as the proof of Theorem 3, since we can write
yes
yes(X)t appears as the sum of the random variable yes(X)0 and the Gaussian process defined by yes which is equal to Fk(X)t defined in Section “Existence and Uniqueness of Solutions in Finite Time” for t0 = 0. Therefore yes for t > 0. We have proved the uniform tightness of the sequence of processes yes in Theorem 3. Hence, according to Kolmogorov’s criterion for tightness, we just have to prove that the sequence of Gaussian random variables:
yes
is uniformly tight. Since it is a sequence of Gaussian random variables, it is sufficient to prove that their means are bounded and their covariance matrices upperbounded to obtain that for any ε > 0 there exists a compact Kε such that for any n yes, we have yes. This is a consequence of Proposition 6 for the first random variable and of the definition of X0 for the second. By Kolmogorov’s criterion the sequence of processes yes is uniformly tight. yes
In order to apply Theorem 1 we need to prove that the sequences of covariance and mean functions converge. Unlike the case where t0 is finite, this is not always true. Indeed, to ensure existence and uniqueness of solutions in the stationary case, the parameters of the system have to satisfy a contraction condition, and Proposition 3 extends as follows.
Proposition 7. If λL defined in Eq. 48 satisfies the conditions (Eq. 53) defined in the proof, depending upon kC (defined in Eq. 45), k0, μLT and ΣLT (defined in Proposition 6), then the sequences of covariance matrix functions Cn(t, s) and of mean functions μn(t), s, t in [t0, T], are Cauchy sequences for the uniform norms.
Proof. The proof follows that of Proposition 3 with a few modifications that we now indicate. In establishing the equation corresponding to Eq. 44 we use the fact that ‖ΦL(t, u)‖ ≤ ke−λL(t − u) for some positive constant k and all u ≤ t. We therefore have:
yes
The rest of the proof proceeds the same way as in Proposition 3. Equations 46 and 47 become:
yes
and
yes
for some positive constant K, function of k, kC (defined in Eq. 45), and k0.
Proceeding recursively until we reach C0 and μ0, we obtain an upperbound for ‖Cn+1(t, s) − Cn(t, s)‖ (respectively for ‖μn+1(t) − μn(t)‖) which is the sum of fewer than 5n terms, each one being the product of Kn, times 2μLT or 2ΣLT, times a 2n-dimensional integral In given by:
yes
where the functions ρi(ui, vi), i = 1,…,n are either equal to 1 or to yes.
It can be shown by straightforward calculation that each sub-integral contributes at most either
yes
in the other case. Hence we obtain factors of the type
yes
where 0 ≤ p ≤ n. If λL < 1, then (λL)(3n + P)/2 ≥ yes, and otherwise yes yes. Since yes, we obtain the two conditions
yes yes
Putting all these results together we obtain the following theorem of existence and uniqueness of solutions for the LTMFE:
Theorem 6. Under the contraction conditions (Eq. 53), the function yes has a unique fixed point in yes, which is stationary, and for any process X, the sequence yes of Gaussian processes converges in law toward this unique fixed point.
Proof. The proof is essentially similar to that of Theorem 4. Indeed, the means and the covariance matrices converge since they are Cauchy sequences in the complete space of continuous functions equipped with the uniform norm. Using Theorem 1, we obtain that the sequence converges to a process Y which is necessarily a fixed point of yes. Hence we have existence of a fixed point for yes. Uniqueness follows from the results obtained in the proof of Proposition 7. The limiting process is necessarily stationary. Indeed, let X be a stationary process. Then for any n ∈ yes, the process yes is stationary by virtue of Lemma 6, and hence so is the limiting process, which is the only fixed point of yes. yes
Hence in the stationary case, existence and uniqueness of a solution are not always ensured. For instance, if the leaks are too small (i.e., when the time constants of the decay of the membrane potentials are too long), the sequence can diverge or have multiple fixed points.
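The role of the contraction condition can be illustrated on a toy scalar recursion, a deliberately simplified stand-in for the iteration on covariances in which the coefficient `a` plays the role of the contraction factor controlled by Eq. 53 (the values below are purely illustrative):

```python
# Toy illustration (not the mean-field map itself): an affine scalar
# iteration v_{n+1} = a * v_n + b mimics how a stationary quantity is
# updated at each pass of the fixed-point scheme.  It converges to the
# unique fixed point b / (1 - a) if and only if |a| < 1, the analogue
# of the contraction condition on lambda_L.

def iterate(a, b, v0, n):
    v = v0
    for _ in range(n):
        v = a * v + b
    return v

# contractive case: converges to b / (1 - a) = 2.0
v_contract = iterate(0.5, 1.0, 10.0, 60)

# non-contractive case: the iterates diverge
v_diverge = iterate(1.5, 1.0, 10.0, 60)
```

When the condition fails the iterates blow up, mirroring the divergence (or multiplicity of fixed points) mentioned above.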

Numerical Experiments

Simulation Algorithm

Beyond the mathematical results, the framework that we introduced in the previous sections gives us a strategy to compute numerically the solutions of the dynamic mean-field equations. Indeed, we proved in Section “Existence and Uniqueness of Solutions in Finite Time” that under very moderate assumptions on the covariance matrix of the noise, the iterations of the map yes starting from any initial condition converge to the solution of the mean-field equations.
This convergence result gives us a direct way to compute numerically the solution of the mean-field equations. Since we are dealing with Gaussian processes, determining the law of the iterates of the map yes amounts to computing its mean and covariance functions. In this section we describe our numerical algorithm in the case of the Simple Model of Section “Example I: The Simple Model”.

Computing yes

Let X be a P-dimensional Gaussian process of mean μX = yes and covariance yes. We fix a time interval [t0 = 0, T] and denote by Y the image of the process X under yes1. In the case of the simple model, the covariance of Y is diagonal. Hence in this case the expressions we obtained in Section “Existence and Uniqueness of Solutions in Finite Time” simply read:
yes
where we denote by yes the standard deviation of Xα at time s, instead of yes. Thus, knowing yes, s ∈ [0, t], we can compute yes using a standard discretization scheme for the integral, with a time step small compared with τα and with the characteristic time of variation of the input current Iα. Alternatively, we can use the fact that yes satisfies the differential equation:
yes
and compute the solution using a Runge–Kutta algorithm (which is faster and more accurate). Note that, when all the standard deviations of the process X vanish for all times t ∈ [0, T], we obtain a standard dynamical system. In the general case, however, yes for some β’s, and the dynamical evolution of yes depends on the Gaussian fluctuations of the field X. These fluctuations must be computed via the complete equation for the diagonal covariance coefficient yes, which reads:
yes
where:
yes
Unless we assume stationarity of the process (see e.g., Section “The Importance of the Covariance: Simple Model, One Population”), this equation cannot be written as an ordinary differential equation. We clearly observe here the non-Markovian nature of the problem: yes depends on the whole past of the process up to time t yes s.
This covariance can be split into the sum of two terms: the external noise contribution yes and the contribution of the interactions between the neurons. The external noise contribution is a simple function and can be computed directly. To compute the contribution of the interactions to the standard deviation we have to compute the symmetric two-variable function:
yes
from which one obtains the standard deviation using the formula
yes
To compute the function yes we start from t = 0 and s = 0, where yes We only compute yes for t > s because of the symmetry. It is straightforward to see that:
yes
with
yes
Hence computing yes knowing yes amounts to computing yes. Fix t ≥ 0. We have yes and
yes
This algorithm enables us to compute yes for t > s. We deduce yes for t < s using the symmetry of this function. Finally, to get the values of yes for t = s, we use the symmetry property of this function and get:
yes
These numerical schemes provide an efficient way for computing the mean and the covariance functions of the Gaussian process yes1(X) (hence its probability distribution) knowing the law of the Gaussian process X. The algorithm used to compute the solution of the mean-field equations for the general models GM1 and GMk is a straightforward generalization.
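As an illustration of the overall scheme, here is a minimal Monte-Carlo sketch of the iteration for one population of the simple model: at each pass, the empirical mean and covariance of S(X) over sample paths define the Gaussian effective-interaction field, which is resampled and integrated to produce the next process iterate. All parameter values, and the sampling-based estimation of the moments, are illustrative assumptions; the algorithm described in the text computes the moments deterministically (quadrature for the covariance, Runge–Kutta for the mean) rather than by sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the values used in the paper)
tau, g = 0.25, 0.5          # membrane time constant, gain of S(x) = tanh(g x)
Jbar, sigma_J = 0.0, 1.0    # mean and standard deviation of the weights
s_noise, I_ext = 0.1, 0.0   # external noise intensity and input current
T, nt, n_samp = 2.0, 41, 4000
t = np.linspace(0.0, T, nt)
dt = t[1] - t[0]

def apply_map(X):
    """One application X -> Y of the mean-field map for one population.

    X: (n_samp, nt) sample paths of the current Gaussian process iterate.
    The effective interaction U is Gaussian with mean Jbar*E[S(X_t)] and
    covariance sigma_J**2 * E[S(X_t) S(X_s)]; both moments are estimated
    empirically here and U is resampled from them."""
    S = np.tanh(g * X)
    mU = Jbar * S.mean(axis=0)
    CU = sigma_J**2 * (S.T @ S) / n_samp          # estimate of E[S(X_t) S(X_s)]
    chol = np.linalg.cholesky(CU + 1e-10 * np.eye(nt))  # jitter for stability
    U = mU + rng.standard_normal((n_samp, nt)) @ chol.T
    # Euler-Maruyama integration of dY = (-Y/tau + U + I_ext) dt + s dW
    Y = np.zeros_like(X)
    Y[:, 0] = X[:, 0]
    for k in range(nt - 1):
        dW = np.sqrt(dt) * rng.standard_normal(n_samp)
        Y[:, k + 1] = Y[:, k] + dt * (-Y[:, k] / tau + U[:, k] + I_ext) \
                      + s_noise * dW
    return Y

# iterate the map, starting from a crude initial Gaussian process
X = 0.1 * rng.standard_normal((n_samp, nt))
for _ in range(5):
    X = apply_map(X)

mu_hat = X.mean(axis=0)            # empirical mean of the current iterate
C_hat = np.cov(X, rowvar=False)    # empirical covariance C(t, s)
```

With centered weights (Jbar = 0) and a small gain, the empirical mean of the iterates stays near 0 while the covariance matrix remains symmetric, as expected from the structure of the map.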

Analysis of the algorithm

Convergence rate. As proved in Theorem 4, given Z0 a nondegenerate kP-dimensional Gaussian random variable and X a Gaussian process such that X(0) = Z0, the sequences of mean values and covariance functions computed theoretically converge uniformly toward those of the unique fixed point of the map yes. Our algorithm converges uniformly toward the function it emulates; hence for a finite N, the algorithm converges uniformly toward the mean and covariance matrix of the process yes.
Denote by Xf the fixed point of yesk in yes, of mean yes and covariance matrix yes, and by yes the numerical approximation of yes computed using the algorithm previously described, whose mean is denoted yes and whose covariance matrix is denoted yes. The uniform error between the simulation after N iterations with time step dt and the fixed point, which is the sum of the numerical error of the algorithm and of the distance between the simulated process and the fixed point, is controlled by:
yes
where kmax = max(k, yes), and k and yes are the constants that appear in the proof of Proposition 3 for the mean and covariance functions, and RN(x) is the exponential remainder, i.e., yes
Indeed, we have:
yes
The discretization algorithm used converges in O(dt). Let us denote by C1 the convergence constant, which depends on the smoothness of the function we approximate and can be uniformly controlled over the iterations. Iterating the numerical algorithm propagates the errors. Using these simple remarks we can bound the first term of the right-hand side of Eq. 55, i.e., the approximation error at the Nth iteration:
yes
Because the sequence of mean values is a Cauchy sequence, we can also bound the second term of the right-hand side of Eq. 55:
yes
for some positive constant k introduced in the proof of Proposition 3. The sequence of remainders (Rn(k))n ≥ 0 converges rapidly to 0 (an estimate of its convergence rate can be obtained using the fact that yes by Stirling’s formula).
Hence we have:
yes
For the covariance, the principle of the approximation is exactly the same:
yes
The second term of the right-hand side can be controlled using the same evaluation by yes, where yes is the constant introduced in the proof of Proposition 3, and the first term is controlled by the convergence rate of the approximation of the double integral, which is bounded by C2(N + T) dt, where C2 depends on the parameters of the system and on the discretization algorithm used.
Hence we have:
yes
The expressions (Eqs. 56 and 57) are each the sum of two terms, one increasing with N, T and dt, and the other decreasing with N. If we want an estimate with an error bounded by some ε > 0, we can for instance fix N such that yes and then choose the time step dt smaller than yes.
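The two-step choice just described (first pick N from the remainder, then pick dt) can be sketched as follows; the constants k and C1 are problem-dependent and are set to 1 here purely for illustration:

```python
import math

def choose_parameters(eps, k=1.0, C1=1.0):
    """Pick N so that the remainder R_N(k) <= eps/2, then a time step dt
    so that the accumulated discretization term C1 * N * dt <= eps/2.
    k and C1 stand in for the problem-dependent constants of Eqs. 56-57."""
    def remainder(N, terms=80):
        # tail of the exponential series, sum_{n > N} k^n / n!
        return sum(k**n / math.factorial(n) for n in range(N + 1, N + 1 + terms))
    N = 1
    while remainder(N) > eps / 2.0:
        N += 1
    dt = eps / (2.0 * C1 * N)
    return N, dt

N, dt = choose_parameters(1e-3)   # with k = C1 = 1: N = 6, dt = 1e-3 / 12
```

For ε = 10⁻³ and unit constants this yields only six iterations of the map, reflecting the fast decay of the remainder; the time step, by contrast, shrinks linearly with ε.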
Complexity. The complexity of the algorithm depends on the complexity of the computation of the integrals. The algorithm described above hence has complexity yes.

The Importance of the Covariance: Simple Model, One Population

As a first example and a benchmark for our numerical scheme we revisit the work of Sompolinsky et al. (1988). These authors studied the case of the simple model with one population (P = 1), with the centered sigmoidal function S(x) = tanh(gx), centered connectivity weights yes of standard deviation σ = 1, and no input (I = 0; Λ = 0). There is therefore no “noise” in the system, which consequently does not satisfy the nondegeneracy conditions of Proposition 2 and of Theorem 4. This issue is discussed below. In this case the mean equals 0 for all t. Nevertheless, the Gaussian process is nontrivial, as revealed by the study of the covariance C(t, s).

Stationary solutions

Assuming that the solution of the mean-field equation is stationary, with yes, Sompolinsky and his collaborators found that the covariance obeys a second-order differential equation:
yes
This form corresponds to the motion of a particle in a potential well, and it is easy to draw the phase portrait of the corresponding dynamical system. There is however a difficulty: the potential Vq depends on a parameter q which is precisely the covariance at τ = 0 (q = C(0)). In the stationary case this covariance depends on the whole solution, and hence cannot really be considered as a parameter of the system. This is one of the main difficulties of this approach: mean-field equations in the stationary regime are self-consistent.
Nevertheless, the study of the shape of Vq, considering q as a free parameter, gives us some information. Indeed, Vq has the following Taylor expansion (Vq is even because S is odd):
yes
where yes and yes〈Φ〉q being the average value of Φ under the Gaussian distribution with mean 0 and variance q = C(0).
If λ > 0, i.e., when yes, then the dynamical system (Eq. 58) has a unique solution C(t) = 0, yes. This corresponds to a stable fixed point (i.e., a deterministic trajectory μ = 0 with no fluctuations) for the neural network dynamics. On the other hand, if yes, there is a homoclinic trajectory of Eq. 58 connecting the point q = C* > 0 where Vq vanishes to the point C = 0. This solution is interpreted by the authors as a chaotic solution of the neural network. A stability analysis shows that this is the only stable stationary solution 7 (Sompolinsky et al., 1988).
The equation for the homoclinic solution is easily found using energy conservation and the fact that Vq(q) = 0 and yes. One finds:
yes
At the fourth-order in the Taylor expansion of Vq this gives
yes
Though λ depends on q it can be used as a free parameter for interpolating the curve of C(τ) obtained from numerical data.
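The particle-in-a-potential picture can be checked numerically on an illustrative quartic potential V(C) = −(λ/2)C² + (b/4)C⁴, a stand-in for the fourth-order expansion of Vq (λ and b are illustrative values here, not the self-consistent ones). Starting from the turning point C* = √(2λ/b) with zero velocity, energy conservation forces the trajectory onto the homoclinic orbit C(τ) = C*/cosh(√λ τ):

```python
import math

lam, b = 1.0, 2.0                      # illustrative potential coefficients
C_star = math.sqrt(2.0 * lam / b)      # turning point where V(C*) = 0

def rhs(C, D):
    # first-order form of C'' = -V'(C) = lam*C - b*C^3
    return D, lam * C - b * C**3

def rk4(C, D, dt, n_steps):
    """Integrate the particle equation with a classical Runge-Kutta step."""
    traj = [C]
    for _ in range(n_steps):
        k1c, k1d = rhs(C, D)
        k2c, k2d = rhs(C + 0.5*dt*k1c, D + 0.5*dt*k1d)
        k3c, k3d = rhs(C + 0.5*dt*k2c, D + 0.5*dt*k2d)
        k4c, k4d = rhs(C + dt*k3c, D + dt*k3d)
        C += dt * (k1c + 2*k2c + 2*k3c + k4c) / 6.0
        D += dt * (k1d + 2*k2d + 2*k3d + k4d) / 6.0
        traj.append(C)
    return traj

dt, n = 0.01, 400
traj = rk4(C_star, 0.0, dt, n)
exact = [C_star / math.cosh(math.sqrt(lam) * k * dt) for k in range(n + 1)]
max_err = max(abs(a - e) for a, e in zip(traj, exact))
```

The numerical trajectory decays toward C = 0 along the analytic homoclinic profile, mirroring the fit of C(τ) against the fourth-order expansion discussed above.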

Numerical experiments

This case is a good benchmark for our numerical procedure since we know analytically the solutions we are looking for. We expect to find two regimes. In the first, the correlation function is identically 0 in the stationary regime, for sufficiently small values of g or of q (trivial case). The second corresponds to a regime where C(τ) > 0 and C(τ) → 0 as τ → +∞ (“chaotic” case). This regime requires that both g and q be sufficiently large. We took τα = 0.25, σαα = 1. For these values, the change in dynamics predicted by Sompolinsky and collaborators occurs at gc = 4.
In Sections “Existence and Uniqueness of Solutions in Finite Time” and “Existence and Uniqueness of Stationary Solutions” we introduced the assumption of nondegeneracy of the noise in order to ensure that the mean-field process is nondegenerate. In the present example, however, there is no external noise in the evolution, so we can observe the effects of relaxing this hypothesis in a situation where the results of Proposition 2 and of Theorem 4 do not apply. First, we observed numerically that, without external noise, the process can become degenerate (namely, some eigenvalues of the covariance matrix Cα(t, s) become very small and even vanish). This also affects the convergence of the method, which exhibits numerical instabilities, though the iterations lead to a curve which is well fitted by the theoretical results of Sompolinsky et al. (see Figure 3). The instability essentially disappears if one adds a small noise. Note however that in this case the solution no longer matches Sompolinsky et al.’s theoretical calculation (see Figure 3).
Figure 3. Numerical solution of the mean-field equation after 14 iterations in the chaotic case (g = 5). We clearly see the numerical instabilities in the no-noise case, which do not exist in the low-noise case.
With this caveat in mind, we first considered the trivial case corresponding to small values of g. We took g = 0.5 and T = 5. We chose as initial process the stationary Ornstein–Uhlenbeck process corresponding to the uncoupled system with Λ = 0.1. We drew μα(0) randomly from the uniform distribution on [−1, 1] and vα(0) randomly from the uniform distribution on [0, 1].
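The initial process just described, the stationary Ornstein–Uhlenbeck process of the uncoupled equation dX = −X/τ dt + Λ dW, has covariance C(t, s) = (Λ²τ/2) exp(−|t − s|/τ). A sketch of how to build it on a time grid (the grid, sample count, and zero mean are illustrative simplifications; the experiment also draws random initial means and variances):

```python
import numpy as np

tau, Lam = 0.25, 0.1
t = np.linspace(0.0, 5.0, 101)

# stationary OU covariance C(t, s) = (Lam^2 * tau / 2) * exp(-|t - s| / tau)
C0 = (Lam**2 * tau / 2.0) * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)

# draw zero-mean sample paths from this covariance via a Cholesky factor
rng = np.random.default_rng(1)
chol = np.linalg.cholesky(C0 + 1e-12 * np.eye(t.size))  # jitter for safety
paths = rng.standard_normal((1000, t.size)) @ chol.T
```

Because the covariance depends only on t − s, the sampled paths have (up to sampling error) the same variance Λ²τ/2 at every grid time, which is what makes this a convenient stationary starting point for the iterations.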
Starting from this initial stationary process, we iterated the function yes1. During the iterations we set sα = 0 in order to match the conditions imposed by Sompolinsky and colleagues. We observe that the method converges toward the expected solution: the mean function converges to 0, while the variance v(t) decreases exponentially fast in time toward a constant value corresponding to the stationary regime. This asymptotic value decreases between two consecutive iterations, which is consistent with the theoretical expectation that v(t) = 0 in the stationary regime of the trivial case. Finally, we observe that the covariance C(t − s, s) stabilizes to a curve that does not depend on s and whose stationary value (large t − s) converges to 0.
We applied the same procedure for g = 5, corresponding to the “chaotic” regime. The behavior was the same for μ(t) but quite different for the covariance function C(t, s). Indeed, while in the first case the stationary value of v(t) tends to 0 with the number of iterations, in the chaotic case it stabilizes at a finite value. In the same way, the covariance C(t − s, s) stabilizes to a curve that does not depend on s. The shape of this curve can be extrapolated from Sompolinsky et al.’s results. We observe a very good agreement with the theoretical predictions, with a fit yes corresponding to the fourth-order expansion of Vq. Using a sixth-order expansion of yes gives a fit
yes
where ρ, K, λ are explicit functions of a, b, c, and we obtain a slightly better approximation.

Mean-Field Equations for Two Populations with a Negative Feedback Loop

Let us now present a case where the fluctuations of the Gaussian field act on the dynamics of μα(t) in a nontrivial way, with a behavior strongly departing from the naive mean-field picture. We consider two interacting populations where the connectivity weights are Gaussian random variables yes for (α, β) ∈ {1, 2}2. We set Sβ(x) = tanh(gx) and Iα = 0, sα = 0, α = 1, 2.

Theoretical framework

The dynamic mean-field equation for μα(t) is given, in differential form, by:
yes
Let us denote by Gα(μ, v(t)) the function on the right-hand side of this equality. Since S is odd, yes. Therefore Gα(0, v(t)) = 0 whatever v(t), and hence the point μ1 = 0, μ2 = 0 is always a fixed point of this equation.
Let us study the stability of this fixed point. To this purpose, we compute the partial derivatives of Gα(μ, v(t)) with respect to μβ for (α, β) ∈ {1, 2}2. We have:
yes
and hence at the point μ1 = 0, μ2 = 0, these derivatives read:
yes
where yes
In the case vα(0) = 0, J = 0, sα = 0, which implies vα(t) = 0 for all t ≥ 0, the equation for μα reduces to:
yes
which is the standard Amari–Cohen–Grossberg–Hopfield system. This corresponds to the naive mean-field approach, where Gaussian fluctuations are neglected. In this case the stability of the fixed point μ = 0 is determined by the sign of the largest real part of the eigenvalues of the Jacobian matrix of the system, which reads:
yes
For the sake of simplicity we assume that the two time constants τα are equal and denote their common value τ. The eigenvalues are in this case yes, where λ are the eigenvalues of yes, and have the form:
yes
Hence, they are complex whenever yes, corresponding to a negative feedback loop between populations 1 and 2. Moreover, they have a nonzero real part only if yes is nonzero (self-interaction).
This opens up the possibility of an instability of the fixed point (μ = 0) leading to a regime where the average value of the membrane potential oscillates. This occurs if yes and if g is larger than:
yes
The corresponding bifurcation is a Hopf bifurcation.
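The eigenvalue picture behind this Hopf bifurcation is easy to check numerically for the naive system τ dμ/dt = −μ + J̄ tanh(gμ), whose Jacobian at μ = 0 is (1/τ)(−I + gJ̄). The connectivity below is an illustrative choice with J̄12 J̄21 < 0 (a negative feedback loop) tuned so that gc = 2; the exact matrix used in the paper is not reproduced here:

```python
import numpy as np

tau = 0.25
# illustrative antisymmetric-coupling connectivity with self-interaction 0.5;
# eigenvalues of Jbar are the complex pair 0.5 +/- i, so g_c = 1/0.5 = 2
Jbar = np.array([[0.5, 1.0],
                 [-1.0, 0.5]])

def max_real_part(g):
    """Largest real part of the eigenvalues of the Jacobian at mu = 0."""
    eig = np.linalg.eigvals((-np.eye(2) + g * Jbar) / tau)
    return eig.real.max()

stable = max_real_part(1.9)    # negative: the fixed point is stable
unstable = max_real_part(2.1)  # positive: oscillatory instability sets in
```

The complex eigenvalue pair crosses the imaginary axis as g passes through 2, the signature of a Hopf bifurcation; the imaginary part sets the frequency of the emerging oscillations of μ(t).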
The situation is different if one takes into account the fluctuations of the Gaussian field. Indeed, in this case the stability of the fixed point μ = 0 depends on v(t). More precisely, the real and imaginary parts of the eigenvalues of DG(0, v(t)) depend on v(t). Therefore, the variations of v(t) act on the stability and on the oscillation period of μ(t). Though the evolutions of μ(t) and v(t) are coupled, we cannot consider them as a coupled dynamical system, since v(t) = C(t, t) is determined by the mean-field equation for C(t, s), which cannot be written as an ordinary differential equation. Note that we cannot assume stationarity here, as in the previous case, since μ(t) depends on time for sufficiently large g. This opens up the possibility of complex dynamical regimes when g is large.

Numerical experiments

We considered the case yes, giving a Hopf bifurcation at gc = 2 when J = 0 (Figure 4). The trajectories of μ1(t) and v1(t) are represented in Figure 4 in the case g = 3. When J = 0, μ1(t) presents regular oscillations (with nonlinear effects, since g = 3 is larger than the critical value gc = 2 of the Hopf bifurcation). In this case the solution v1(t) = 0 is stable, as seen in the figure. When J ≠ 0 the Gaussian field has (small) fluctuations which nevertheless interact strongly with the dynamics of μ1(t), leading to a regime where μ1(t) and v1(t) oscillate periodically.
Figure 4. Evolution of the mean µ1(t) and variance yes1(t) for the mean-field of population 1, for J = 0 and 2, over a time window [0, 20]. n is the number of iterations of yes1 defined in Section “Existence and Uniqueness of Solutions in Finite Time”. This corresponds to a number of iterations for which the method has essentially converged (up to some precision). Note that yes1(t) has been magnified by a factor of 100. Though Gaussian fluctuations are small, they have a strong influence on μ1(t).

Discussion

The problem of bridging scales is a pervasive one in the study of complex systems, and in particular in neuroscience. After many others, we looked at this difficult problem from the theoretical and numerical viewpoints, hoping to get closer to its solution from relatively simple and physically/biologically plausible first principles and assumptions. One of our motivations is to better understand such phenomenological neural mass models as that of Jansen and Rit (1995).
We consider several populations of neurons and start from a microscopic, i.e., individual, description of the dynamics of the membrane potential of each neuron that contains four terms.
The first one controls the intrinsic dynamics of the neuron. It is linear in this article but this assumption is not essential and could probably be safely removed if needed.
The second term is a stochastic input current, correlated or uncorrelated. The corresponding noise model is very rich, depending on the degree k of smoothness of the g-shapes. It features integrated Brownian noise up to order k − 1.
The third term is a deterministic input current, and the fourth one describes the interaction between the neurons through random connectivity coefficients that weigh the contributions of other neurons through a set of functions applied to their membrane potentials. The only hypothesis on these functions is that they are smooth and bounded, with bounded first-order derivatives. The obvious choice of sigmoids is motivated by standard rate-model ideas. Another appealing choice is a smooth approximation of a Dirac delta function, thereby opening a window on the world of spiking neurons. Thus, the model presented in this paper is more general than the instantaneous rate model that underlies Ermentrout’s voltage-based model (Ermentrout, 1998), even though we have not explored this avenue.
We then derive the mean-field equations and provide a constructive and new proof, under some mild assumptions, of the existence and uniqueness of a solution of these equations over finite and infinite time intervals. The key idea is to look at this mean-field description as a global problem on the probability distribution of the membrane potentials, unlike previous studies. Our proof provides an efficient way of computing this solution, and our numerical experiments show a good agreement with previous studies. It is interesting to note that a sufficient condition for the convergence of our algorithm is related to the previously mentioned noise model. We prove that if the noise matrix F is of full rank, with bounded eigenvalues, then the algorithm is in general convergent. An important fact to note is that the solutions of the mean-field equations that we construct are fundamentally non-Markovian, eliminating the need for such approximations as the introduction of the q parameter summarizing the whole history of the non-Markovian process (see below).
In the case where the nonlinearities are chosen to be sigmoidal our results shed a new light on existing neural mass models. Indeed, as shown in Section “General Derivation of the Mean-Field Equation”, these appear as approximations of the mean-field equations where the intricate but fundamental coupling between the time variations of the mean membrane potentials and their fluctuations, as represented by the covariance functions, is neglected.
An alternative approach has been recently proposed by Chizhov and collaborators 8 (Chizhov and Graham, 2007; Chizhov et al., 2007). It consists in reducing the large number, N, of degrees of freedom of the neural assembly by constructing a probability density ρ on the phase space of neuron states in the limit N → ∞. This is a non-rigorous approach in which the evolution equations for ρ are derived heuristically. In particular, it is assumed that ρ depends on two parameters only: the current time t and the time elapsed since the last spike t*. Under these assumptions the initial phase space of neuron states is mapped to a two-dimensional space (t, t*), while ρ(t, t*)dt characterizes the fraction of neurons which have fired in the time interval [t − t*, t − t* + dt]. Therefore, this approach intrinsically holds for integrate-and-fire neuron models, where the neuron’s membrane potential history is summarized by the last spike time, at which it is reset to a constant value. As noticed by these authors, this allows one to circumvent the main problem of mean-field approaches for firing-rate models, which we also discuss in the present paper: when using mean-field theory to characterize stationary regimes, one needs to introduce ad hoc parameters (see e.g., the parameter q introduced in Section “Stationary Solutions”) summarizing the whole history of the non-Markovian process. Introducing a “history cut-off” by resetting the membrane potential to a constant value indeed removes this difficulty. It might therefore be interesting to compare our approach, in the case of integrate-and-fire models (see the above remark on the choice of the nonlinearity), to that of Chizhov and collaborators. This could provide some rigorous basis for their analysis and help elucidate the role of the field fluctuations, which do not appear explicitly in the probability density approach.

Conclusion and Further Work

On more general grounds, our goal is now to extend the present work in several directions.

Bifurcation Analysis of the Dynamic Mean-Field Equations

From the present analysis, and as shown in the simple examples of Section “Numerical Experiments”, the mesoscopic dynamics of the average membrane potential of a population of neurons can be quite different from that of the classical phenomenological equations à la Jansen–Rit if one includes the non-Markovian fluctuations of the interaction fields, which summarize the cumulative effects of the nonlinear interactions of a given neuron with the bulk of other neurons. Jansen–Rit equations are commonly used in the neuroscience community either to anticipate the dynamics of local field potentials in relation with imaging (optical imaging, MEG/EEG), or to understand neurophysiological disorders such as epilepsy. Bifurcation analysis of these equations reveals dynamical regimes that can be related to experiments (Grimbert and Faugeras, 2006). They can be generalized using more accurate neural models (Wendling et al., 2005). Is there any need to generalize these equations, which we claim are incomplete, when people commonly use them with some satisfaction? Are there new, experimentally accessible phenomena exhibited by the generalized mean-field equations that do not appear in the naive ones? These are obviously important questions that we intend to address in the near future. On mathematical grounds, the goal is to perform a bifurcation analysis of the map yes, introduced in the present paper, on the space of trajectories. Do new salient dynamical regimes appear? If such regimes exist, the goal will be, on experimental grounds, to interact with experimentalists in order to see under which conditions such a regime can be exhibited, and what its implications are for the dynamics or function of cortical columns.

Investigations of Nonstationary Regimes

As discussed in this paper, and as is well known in the physics community (especially the spin-glass community), the dynamic mean-field approach raises serious difficulties as soon as one tries to describe stationary dynamics. On technical grounds, this stems from the non-commutativity of the two limits N → ∞ and t → ∞, already discussed in Sompolinsky and Zippelius (1982). As a result, one is led to introduce ad hoc phenomenological parameters, depending on initial conditions, that can be determined in statistical physics models where the distribution of equilibria is known (Gibbs distribution), using sophisticated techniques such as the replica “trick” (Houghton et al., 1983). For spin glasses a simple solution to this problem is known only in the high-temperature regime. This restriction also appears in the present paper, where existence and uniqueness of a stationary solution are proved only for low values of the gain parameter g (which plays a role similar to an inverse temperature). However, we are not so much interested in stationary dynamics, since brain processes are ultimately nonstationary. Our approach, valid for any finite time T, opens up the possibility of characterizing mean-field equations in transient regimes, with an analysis strategy that can moreover be easily implemented. To the best of our knowledge, this type of technique has never been used in the statistical physics community, where iterations on spaces of trajectories are not in the standard toolbox. Therefore, our work could allow the (numerical) investigation of cortical columns submitted to nonstationary inputs, with strong implications for neuroscience.

Extension to A Larger Class of Models

A very challenging question is the application of this theory to spiking neuron models. We have briefly mentioned in Section “Discussion” that this may be possible through the use of non-sigmoidal functions in the interaction terms. This idea could be applied to the analysis of integrate-and-fire models with conductance-based synapses, which are good models of spiking neurons. As discussed at the end of Section “Discussion”, the analysis of the mean-field equations could be simplified by the fact that memory is reset when a neuron fires. There is however a need to characterize the regions of parameter space where neurons can take an arbitrarily long time to fire for the first time (Cessac, 2008; Cessac and Viéville, 2008). This is the main obstruction to the application of our theory to this type of model.

Appendix

A. Identification of the Mean-Field Equations

Ben-Arous and Guionnet studied from a mathematical point of view the problem of finding a mean-field description of large networks of spin glasses. Using different methods of stochastic analysis, they obtained a weak limit of the law of a given spin and proved the asymptotic independence of the spins.
Our equations do not directly fit into their study: indeed, in their setting the intrinsic spin dynamics is nonlinear while the interaction is linear, and everything is done in dimension one. Nevertheless, their proof extends to our case, which is in some respects simpler. For instance, in the case of the Simple Model with one population, we can readily adapt their proof. More precisely, let P = 1; the equation of the network reads:
yes
In this case, we define for yes the effective interaction term yes which is the effective interaction process defined in Section “The Mean-Field Equations”, i.e., the Gaussian process of mean yes and of covariance: yes
Let us denote by yes the law of the membrane potential when there is no interaction (it is an Ornstein–Uhlenbeck process), and by yes the empirical measure. We can prove that, under the probability distribution averaged over the connectivities (see below), the empirical measure satisfies a large deviation principle with good rate function H defined as in Guionnet (1997). Using this large deviation result, we can prove annealed and quenched tightness of the empirical measure, and finally its convergence toward the unique process at which the good rate function H achieves its minimum, which is characterized by having a density with respect to yes satisfying the implicit equation:
yes
where 𝔼 denotes expectation over the effective interaction process UQ.
We can also prove following the steps of Ben-Arous and Guionnet (1997) that there exists a unique solution to this equation, and that this solution satisfies the nonlinear non-Markovian stochastic differential equation:
yes
which can also be written as our mean-field equation, averaged on the connectivities (see Ben-Arous and Guionnet, 1995 ). More precisely, let LV be the law of the solution of the equation:
yes
which is exactly Eq. 33. They prove that V satisfies the nonlinear equation:
yes
This result can probably be extended to the multi-population case using the multidimensional Girsanov theorem, but the corresponding mathematical developments are beyond the scope of this paper.

B. The Resolvent

In this appendix we introduce and give some useful properties of the resolvent ΦL of a homogeneous differential equation:
yes
where yes.
Definition B.1. The resolvent of Eq. 61 is defined as the unique solution of the linear equation:
yes
where IdP is the P × P identity matrix.
Proposition B.1. The resolvent satisfies the following properties:
(i) ΦL(t + s, t0) = ΦL(t + s, t) · ΦL(t, t0)
(ii) ΦL(t, t0) is invertible of inverse ΦL(t0, t) which satisfies:
yes
(iii) Let ‖ ‖ be a norm on yesP × P and assume that yes on [t0, T]. Then we have:
yes
Similarly, if yes on [t0, T] we have:
yes
(iv) We have
yes
Proof. Properties (i) and (ii) follow directly from the group property of the flow of a reversible ODE. (iii) is an application of Gronwall’s lemma. (iv) is obtained from a first-order Taylor expansion. yes
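Properties (i) and (ii) are easy to verify numerically: integrating dΦ/dt = L(t)Φ with Φ(t0, t0) = Id by a Runge–Kutta scheme, for an illustrative time-dependent matrix L(t), the composed resolvents multiply as the group property requires, and integrating backward yields the inverse. The matrix L below is an arbitrary example, not one from the paper:

```python
import numpy as np

def L(t):
    # illustrative time-dependent coefficient matrix
    return np.array([[-1.0, 0.5 * np.sin(t)],
                     [0.2 * np.cos(t), -2.0]])

def resolvent(t1, t0, n_steps=2000):
    """RK4 integration of d(Phi)/dt = L(t) Phi, Phi(t0, t0) = Id."""
    dt = (t1 - t0) / n_steps
    Phi = np.eye(2)
    for k in range(n_steps):
        t = t0 + k * dt
        k1 = L(t) @ Phi
        k2 = L(t + 0.5 * dt) @ (Phi + 0.5 * dt * k1)
        k3 = L(t + 0.5 * dt) @ (Phi + 0.5 * dt * k2)
        k4 = L(t + dt) @ (Phi + dt * k3)
        Phi = Phi + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return Phi

A = resolvent(2.0, 0.0)                          # Phi_L(2, 0)
B = resolvent(2.0, 1.0) @ resolvent(1.0, 0.0)    # Phi_L(2, 1) . Phi_L(1, 0)
A_inv = resolvent(0.0, 2.0)                      # Phi_L(0, 2), property (ii)
```

Up to discretization error, A and B coincide (group property) and A_inv is the matrix inverse of A, confirming invertibility of the resolvent.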
Theorem B.2 (Solution of an inhomogeneous linear SDE). The solution of the inhomogeneous linear stochastic differential equation:
yes
can be written using the resolvent:
yes
Proof. Pathwise (strong) uniqueness of the solution follows directly from the results on SDEs with Lipschitz coefficients (see e.g., Karatzas and Shreve, 1991, Theorem 2.5 of Chapter 5). It is clear that Xt0 = X0. We use Itô’s formula for the product of two stochastic processes to prove that the process of Eq. 67 is a solution of Eq. 66:
yes
yes
Hence the theorem is proved. yes

C. Matrix Norms

In this section we recall some definitions of matrix and vector norms. Let yesn × n be the set of n × n real matrices. It is a vector space of dimension n2, and the usual Lp norms, 1 ≤ p ≤ ∞, can be defined on it. Given L ∈ yesn × n, we denote by yes the corresponding norm. Given a vector norm ‖ ‖ on yesn, the induced norm, also denoted ‖ ‖, on yesn × n is defined as
yes
Since yesn×n is finite dimensional all norms are equivalent. In this article we use the following norms
(i) yes
(ii) yes
(iii) yes This so-called spectral norm is equal to the square root of the largest singular value of L which is the largest eigenvalue of the positive matrix LTL. If L is positive definite this is its largest eigenvalue which is also called its spectral radius, noted ρ(L).
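These three norms can be computed and cross-checked in NumPy; the matrix below is an arbitrary illustration.

```python
import numpy as np

L = np.array([[1.0, -2.0],
              [3.0, 4.0]])

# (i) infinity norm: maximum absolute row sum
norm_inf = np.abs(L).sum(axis=1).max()
assert norm_inf == np.linalg.norm(L, np.inf)
assert norm_inf == 7.0

# (ii) 1-norm: maximum absolute column sum
norm_1 = np.abs(L).sum(axis=0).max()
assert norm_1 == np.linalg.norm(L, 1)
assert norm_1 == 6.0

# (iii) spectral norm: largest singular value of L, i.e. the square root
# of the largest eigenvalue of the positive semidefinite matrix L^T L
norm_2 = np.linalg.norm(L, 2)
assert np.isclose(norm_2, np.sqrt(np.linalg.eigvalsh(L.T @ L).max()))
assert np.isclose(norm_2, np.linalg.svd(L, compute_uv=False).max())
```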

D. Important Constants

Table 1 summarizes some notations which are introduced in the article and used in several places.
yes

E. Proof of Lemma 2

Lemma E.1. The following upperbounds are valid for all n ≥ 1 and all s, t ∈ [t0, T].
yes
yes
where μ and σmax are defined in Lemma 1 and yes is defined in Assumption 1.
Proof. The first inequality follows from taking the infinite norm of both sides of Eq. 41 and using Assumption 1 (a) and (c), Eq. 64, and Lemma 1.
The second inequality follows from taking the infinite norm of both sides of Eq. 42 and using Assumption 1 (a) and (b), Eqs. 64 and 65, and Lemma 1. □

F. Proof of Lemma 3

Lemma F.1. For all t ∈ [t0, T] all α = 1,…,kP, and n ≥ 1, we have
yes
where λmin is the smallest singular value of the symmetric positive definite matrix ΦL(t, t0L(t, t0)T for t ∈ [t0, T] and yes is the smallest eigenvalue of the symmetric positive definite covariance matrix yes
Proof. yes is larger than yes which is larger than the smallest eigenvalue of the matrix yes This smallest eigenvalue is equal to
yes
In the last expression the first term is larger than the smallest eigenvalue yes of the matrix yes, which is positive definite since we have assumed the Gaussian random variable Z0 to be nondegenerate. The second term is equal to the smallest singular value yes of the matrix yes, which is also strictly positive since yes is invertible for all t ∈ [t0, T]; see Appendix B. □
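Two linear-algebra facts underlie this proof: Weyl’s inequality λ_min(A + B) ≥ λ_min(A) + λ_min(B) for symmetric matrices, and the identity between λ_min(ΦΦ^T) and the square of the smallest singular value of Φ. Both are easy to check numerically; the matrices below are arbitrary illustrations, not the ones from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def lam_min(M):
    """Smallest eigenvalue of a symmetric matrix."""
    return np.linalg.eigvalsh(M).min()

G = rng.normal(size=(4, 4))
A = G @ G.T                                    # symmetric positive semidefinite
Phi = np.eye(4) + 0.1 * rng.normal(size=(4, 4))
B = Phi @ Phi.T                                # plays the role of Phi_L Phi_L^T

# Weyl's inequality for the smallest eigenvalue of a sum
assert lam_min(A + B) >= lam_min(A) + lam_min(B) - 1e-12

# lam_min(B) is the square of the smallest singular value of Phi
s_min = np.linalg.svd(Phi, compute_uv=False).min()
assert np.isclose(lam_min(B), s_min ** 2)
```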

G. Proof of Lemma 4

Lemma G.1. For all α = 1,…,kP and n ≥ 1 the quantity yes is lowerbounded by the positive symmetric function:
yes
where yes is the strictly positive lower bound, introduced in 3.1, on the singular values of the matrix F(u) for u ∈ [t0, T].
Proof. We use Eq. 42 which we rewrite as follows, using the group property of the resolvent ΦL:
yes
We now assume s < t and introduce the following notations:
yes
yes
Let eα, α = 1,…,kP, be the unit vector of the canonical basis whose coordinates are all equal to 0 except the αth one, which is equal to 1. We note Eα(t) the vector yes. We have, dropping the index n for simplicity:
yes
Note that the last expression does not depend on s, since A(s) + B(s, t) = A(t), which is consistent with the first equality. The reason why we introduce s in this expression is to simplify the following calculations.
The expression yes is the sum of four sub-expressions:
yes
which is ≥0 because A(s) is a covariance matrix,
yes
which is also ≥0 because a(t, s) is a covariance matrix function,
yes
Because a(t, s) is a covariance matrix function, we have
yes
and, as seen above, yes. Because yes we also have
yes
and, as can be readily verified, this implies yes
Therefore, we can lowerbound yes by the fourth subexpression:
yes
since B(s, t) and a(s, s) are covariance matrices. We next have
yes
by definition of yes. Therefore
yes
where yes is the smallest eigenvalue of the symmetric positive matrix C. Similarly we have
yes
Let us write γ(u) = F(u)F(u)T. We have (Assumption 1):
yes
Combining these results, we have
yes yes

H. Proof of Lemma 5

Lemma H.1. The 2n-dimensional integral
yes
where the functions ρi(ui, vi), i = 1,…,n, are either equal to 1 or to yes (the function θ is defined in Lemma 4), is upperbounded by k^n/(n − 1)! for some positive constant k.
Proof. First note that the integral is well-defined because of Lemma 4. Second, note that there exists a constant K such that
yes for all (u, v) ∈ [t0, t ∨ s]²,
i.e., yes. Therefore the integral is upperbounded by K_0^n, where K0 = max(1, K), times the integral obtained when yes for all i = 1,…,n. Let us then consider this situation. Without loss of generality we assume t0 = 0. The cases n = 1, 2, 3 allow one to understand the process.
yes
Let us rotate the axes by π/4 by performing the change of variables
yes
Using the symmetry of the integrand in s and t and the change of variable, the integral in the righthand side of Eq. 68 is equal to (see Figure 5 ):
Figure 5. The change of coordinates.
yes
where yes and yes
Let us now look at I2. It is upperbounded by the factor yes times the integral
yes yes
Since in the area of integration yes we are led to the product of 2/5 by the one-dimensional integral
yes
where yes
Similarly I3 is upperbounded by the product of yes times the integral
yes
where yes. One then easily shows that:
yes
It can be verified using a symbolic computation system that 0 < αi < 1 for all i ≥ 1. One also notices that
yes
therefore
yes
and this finishes the proof. □
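The 1/(n − 1)! decay obtained here is the familiar factorial decay of integrals over time-ordered domains: the ordered simplex {t0 ≤ u1 ≤ … ≤ un ≤ T} has volume (T − t0)^n/n!. A small Monte Carlo sketch of this elementary fact (parameters chosen arbitrarily for illustration):

```python
import math
import numpy as np

# A uniform point of [0, T]^n has its coordinates already sorted with
# probability 1/n!, so the ordered simplex has volume T^n / n!.
rng = np.random.default_rng(2)
T, n, m = 2.0, 4, 200_000
u = rng.uniform(0.0, T, size=(m, n))
frac_sorted = np.mean(np.all(np.diff(u, axis=1) >= 0.0, axis=1))
estimate = frac_sorted * T ** n
exact = T ** n / math.factorial(n)
assert abs(estimate - exact) / exact < 0.1
```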

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgement

This research was partly supported by funding of the European Union under the grant no. 15879 (FACETS) and the Fondation d’Entreprise EADS. It was also supported by the MACACC ARC INRIA and the Doeblin CNRS Federation.

Footnotes

  1. ^ When we come to the mean-field equations they will be modeled as random variables.
  2. ^ We have modified the original model which is not voltage-based.
  3. ^ We note Γ(t) the matrix F(t)F(t)T.
  4. ^ In Bogachev (1998 ; Chapter 3.8), the property is stated whenever the mean and covariance are defined on a separable Hilbert space.
  5. ^ For simplicity we abuse notation and identify yes and X.
  6. ^ The notation yes is introduced in Appendix C.
  7. ^ More precisely, this is the only minimum for the large deviation functional.
  8. ^ We thank one of the referees for pointing out these references to us.

References

Abbott, L. F., and Van Vreeswijk, C. A. (1993). Asynchronous states in networks of pulse-coupled oscillators. Phys. Rev. E 48, 1483–1490.
Amari, S.-I. (1972). Characteristics of random nets of analog neuron-like elements. IEEE Trans. Syst. Man Cybern. 2, 643–657.
Amari, S.-I., Yoshida, K., and Kanatani, K.-I. (1977). A mathematical foundation for statistical neurodynamics. SIAM J. Appl. Math. 33, 95–126.
Ben-Arous, G., and Guionnet, A. (1995). Large deviations for Langevin spin glass dynamics. Probab. Theory Relat. Fields 102, 455–509.
Ben-Arous, G., and Guionnet, A. (1997). Symmetric Langevin spin glass dynamics. Ann. Probab. 25, 1367–1422.
Billingsley, P. (1999). Convergence of Probability Measures (Wiley Series in Probability and Statistics). New York, NY, John Wiley and Sons, Inc.
Bogachev, V. I. (1998). Gaussian Measures. Providence, RI, American Mathematical Society.
Braitenberg, V., and Schüz, A. (1998). Cortex: Statistics and Geometry of Neuronal Connectivity, 2nd Edn. Berlin, Springer.
Brunel, N., and Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 11, 1621–1671.
Cessac, B. (1995). Increase in complexity in random neural networks. J. Phys. I (France) 5, 409–432.
Cessac, B. (2008). A discrete time neural network model with spiking neurons: rigorous results on the spontaneous dynamics. J. Math. Biol. 56, 311–345.
Cessac, B., and Viéville, T. (2008). On dynamics of integrate-and-fire neural networks with adaptive conductance based synapses. Front. Neurosci. 2, 2.
Chizhov, A. V., and Graham, L. J. (2007). Population model of hippocampal pyramidal neurons, linking a refractory density approach to conductance-based neurons. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 75, 011924-1–011924-14.
Chizhov, A. V., Rodrigues, S., and Terry, J. R. (2007). A comparative analysis of a firing-rate model and a conductance-based neural population model. Phys. Lett. A 369, 31–36.
Crisanti, A., Sommers, H. J., and Sompolinsky, H. (1990). Chaos in neural networks: chaotic solutions.
Crisanti, A., and Sompolinsky, H. (1987a). Dynamics of spin systems with randomly asymmetric bonds: Langevin dynamics and a spherical model. Phys. Rev. A 36, 4922–4939.
Crisanti, A., and Sompolinsky, H. (1987b). Dynamics of spin systems with randomly asymmetric bonds: Ising spins and Glauber dynamics. Phys. Rev. A 37, 4865.
Dayan, P., and Abbott, L. F. (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA, MIT Press.
Ermentrout, B. (1998). Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys. 61, 353–430.
Freeman, W. J. (1975). Mass Action in the Nervous System. New York, Academic Press.
Gerstner, W. (1995). Time structure of the activity in neural network models. Phys. Rev. E 51, 738–758.
Gerstner, W., and Kistler, W. M. (2002). Mathematical formulations of Hebbian learning. Biol. Cybern. 87, 404–415.
Grimbert, F., and Faugeras, O. (2006). Bifurcation analysis of Jansen’s neural mass model. Neural Comput. 18, 3052–3068.
Guionnet, A. (1997). Averaged and quenched propagation of chaos for spin glass dynamics. Probab. Theory Relat. Fields 109, 183–215.
Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two state neurons. Proc. Natl. Acad. Sci. USA 81, 3088–3092.
Houghton, A., Jain, S., and Young, A. P. (1983). Role of initial conditions in the mean-field theory of spin-glass dynamics. Phys. Rev. B Condens. Matter 28, 2630–2637.
Jansen, B. H., and Rit, V. G. (1995). Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol. Cybern. 73, 357–366.
Karatzas, I., and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics, Vol. 113, 2nd Edn. New York, Springer-Verlag.
Kushner, H. J. (1984). Approximation and Weak Convergence Methods for Random Processes with Applications to Stochastic Systems Theory. Cambridge, MA, MIT Press.
Mattia, M., and Del Giudice, P. (2002). Population dynamics of interacting spiking neurons. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 66, 051917.
Molgedey, L., Schuchardt, J., and Schuster, H.G. (1992). Suppressing chaos in neural networks by noise. Phys. Rev. Lett. 69, 3717–3719.
Moynot, O., and Samuelides, M. (2002). Large deviations and mean field theory for asymmetric random recurrent neural networks. Probab. Theory Relat. Fields 123, 41–75.
Samuelides, M., and Cessac, B. (2007). Random recurrent neural networks. Eur. Phys. J. Spec. Top. 142, 7–88.
Sompolinsky, H., Crisanti, A., and Sommers, H. J. (1988). Chaos in random neural networks. Phys. Rev. Lett. 61, 259–262.
Sompolinsky, H., and Zippelius, A. (1982). Relaxational dynamics of the Edwards–Anderson model and the mean-field theory of spin-glasses. Phys. Rev. B 25, 6860–6875.
Treves, A. (1993). Mean-field analysis of neuronal spike dynamics. Netw. Comput. Neural Syst. 4, 259–284.
van Rotterdam, A., Lopes da Silva, F. H., van den Ende, J., Viergever, M. A., and Hermans, A. J. (1982). A model of the spatial–temporal characteristics of the alpha rhythm. Bull. Math. Biol. 44, 283–305.
Wendling, F., Hernandez, A., Bellanger, J.J., Chauvel, P., and Bartolomei, F. (2005). Interictal to ictal transition in human temporal lobe epilepsy: insights from a computational model of intracerebral EEG. J. Clin. Neurophysiol. 22, 343–356.
Keywords:
mean-field analysis, stochastic processes, stochastic differential equations, stochastic networks, stochastic functional equations, random connectivities, multi-populations networks, neural mass models
Citation:
Faugeras O, Touboul J and Cessac B (2009). A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs. Front. Comput. Neurosci. 3:1. doi: 10.3389/neuro.10.001.2009
Received:
04 August 2008;
 Paper pending published:
06 October 2008;
Accepted:
26 January 2009;
 Published online:
18 February 2009.

Edited by:

David Hansel, University of Paris, France

Reviewed by:

Claude Meunier, René Descartes University, France
Germán Mato, CONICET - Centro Atómico Bariloche - Instituto Balseiro, Argentina
Copyright:
© 2009 Faugeras, Touboul and Cessac. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution and reproduction in any medium, provided the original authors and source are credited.
*Correspondence:
Olivier Faugeras, NeuroMathComp Laboratory, INRIA/ENS, 2004 Route des Lucioles, 06902 Sophia-Antipolis, France. e-mail: olivier.faugeras@sophia.inria.fr