# A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs

^{1} NeuroMathComp Laboratory, INRIA/ENS, France
^{2} Laboratoire Jean-Alexandre Dieudonné, Nice, France
^{3} Université de Nice, Nice, France

We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales.

Modeling neural activity at scales integrating the effect of thousands of neurons is of central importance for several reasons. First, most imaging techniques are not able to measure individual neuron activity (“microscopic” scale), but instead measure mesoscopic effects resulting from the activity of several hundreds to several hundreds of thousands of neurons. Second, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50 μm to 1 mm, containing of the order of 100 to 100,000 neurons belonging to a few different species. These columns have specific functions. For example, in the visual cortex V1, they respond to preferential orientations of bar-shaped visual stimuli. In this case, information processing does not occur at the scale of individual neurons but rather corresponds to an activity integrating the collective dynamics of many interacting neurons and resulting in a mesoscopic signal. The description of this collective dynamics requires models which are different from individual neuron models. In particular, if the accurate description of one neuron requires “*m*” parameters (such as sodium, potassium, and calcium conductances, membrane capacitance, etc.), it is not necessarily true that an accurate mesoscopic description of an assembly of *N* neurons requires *Nm* parameters. Indeed, when *N* is large enough averaging effects appear, and the collective dynamics is well described by an effective mean-field, summarizing the effect of the interactions of a neuron with the other neurons and depending on a few effective control parameters. This vision, inherited from statistical physics, requires that the spatial scale be large enough to include a large number of microscopic components (here neurons) and small enough so that the region considered is homogeneous. This is for instance the case of cortical columns.

However, obtaining the evolution equations of the effective mean-field from the microscopic dynamics is far from evident. In simple physical models this can be achieved via the law of large numbers and the central limit theorem, provided that time correlations decrease sufficiently fast. This type of approach has been generalized to such fields as quantum field theory or non-equilibrium statistical mechanics. To the best of our knowledge, the idea of applying mean-field methods to neural networks dates back to Amari (Amari, 1972; Amari et al., 1977). In his approach, the author uses an assumption that he called the “local chaos hypothesis”, reminiscent of Boltzmann's “molecular chaos hypothesis”, that postulates the vanishing of individual correlations between neurons, when the number

*N* of neurons tends to infinity. Later on, Sompolinsky et al. (1998) used a dynamic mean-field approach to conjecture the existence of chaos in a homogeneous neural network with random independent synaptic weights. This approach was formerly developed by Sompolinsky and colleagues for spin-glasses (Crisanti and Sompolinsky, 1987a,b; Sompolinsky and Zippelius, 1982), where complex effects such as aging or the coexistence of a diverging number of metastable states render the mean-field analysis delicate in the long-time limit (Houghton et al., 1983). By contrast, these effects do not appear in the neural network considered in Sompolinsky et al. (1998) because the synaptic weights are independent (Cessac, 1995) (and especially non-symmetric, in opposition to spin-glasses). In this case, the Amari approach and the dynamic mean-field approach lead to the same mean-field equations. Later on, the mean-field equations derived by Sompolinsky and Zippelius (1982) for spin-glasses were rigorously obtained by Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). The application of their method to a discrete time version of the neural network considered in Sompolinsky et al. (1998) and in Molgedey et al. (1992) was done by Moynot and Samuelides (2002).

Mean-field methods are often used in the neural network community, but there are only a few rigorous results using the dynamic mean-field method. The main advantage of dynamic mean-field techniques is that they allow one to consider neural networks where the synaptic weights are random (and independent). The mean-field approach allows one to state general and generic results about the dynamics as a function of the statistical parameters controlling the probability distribution of the synaptic weights (Samuelides and Cessac, 2007). It not only provides the evolution of the mean activity of the network but, because it is an equation on the law of the mean-field, it also provides information on the fluctuations around the mean and their correlations. These correlations are of crucial importance, as revealed in the paper by Sompolinsky et al. (1998). Indeed, in their work, the analysis of correlations allows them to discriminate between two distinct regimes: a dynamics with a stable fixed point and a chaotic dynamics, while the mean is identically 0 in the two regimes.

However, this approach also has several drawbacks, explaining why it is so seldom used. First, this method uses a generating-function approach that requires heavy computations and some “art” for obtaining the mean-field equations. Second, it is hard to generalize to models including several populations. Finally, dynamic mean-field equations are usually supposed to characterize *in fine* a stationary process. It is then natural to search for stationary solutions. This considerably simplifies the dynamic mean-field equations by reducing them to a set of differential equations (see Section “Numerical Experiments”), but the price to pay is the unavoidable occurrence in the equations of a non-free parameter, the initial condition, that can only be characterized through the investigation of the nonstationary case.

Hence it is not clear whether such a stationary solution exists, and, if it is the case, how to characterize it. To the best of our knowledge, this difficult question has only been investigated for neural networks in one paper by Crisanti et al. (1990).

Different alternative approaches have been used to get a mean-field description of a given neural network and to find its solutions. In the neuroscience community, a static mean-field study of multi-population network activity was developed by Treves (1993). This author did not consider external inputs but incorporated dynamical synaptic currents and adaptation effects. His analysis was completed in Abbott and Van Vreeswijk (1993), where the authors considered a unique population of nonlinear oscillators subject to a noisy input current. They proved, using a stationary Fokker–Planck formalism, the stability of an asynchronous state in the network. Later on, Gerstner (1995) built a new approach to characterize the mean-field dynamics for the Spike Response Model, via the introduction of suitable kernels propagating the collective activity of a neural population in time.

Brunel and Hakim (1999) considered a network composed of integrate-and-fire neurons connected with constant synaptic weights. In the case of sparse connectivity, stationarity, and a regime where individual neurons emit spikes at a low rate, they were able to study analytically the dynamics of the network and to show that it exhibits a sharp transition between a stationary regime and a regime of weakly synchronized fast collective oscillations. Their approach was based on a perturbative analysis of the Fokker–Planck equation. A similar formalism was used in Mattia and Del Giudice (2002) which, when complemented with self-consistency equations, resulted in the dynamical description of the mean-field equations of the network, and was extended to a multi-population network.

Finally, Chizhov and Graham (2007) have recently proposed a new method based on a population density approach allowing one to characterize the mesoscopic behavior of neuron populations in conductance-based models. We briefly discuss their approach and compare it to ours in Section “Discussion”.

In the present paper, we investigate the problem of deriving the equations of evolution of neural masses at mesoscopic scales from the dynamics of individual neurons, using a new and rigorous approach based on stochastic analysis.

The article is organized as follows. In Section “Mean-Field Equations for Multi-Populations Neural Network Models” we derive from first principles the equations relating the membrane potential of each of a set of neurons as a function of the external injected current and noise and of the shapes and intensities of the postsynaptic potentials, in the case where these shapes depend only on the postsynaptic neuron (the so-called voltage-based model). Assuming that the shapes of the postsynaptic potentials can be described by linear (possibly time-dependent) differential equations, we express the dynamics of the neurons as a set of stochastic differential equations. Assuming that the synaptic connectivities between neurons satisfy statistical relationships depending only on the populations they belong to, we obtain the mean-field equations summarizing the interactions of the *P* populations in the limit where the number of neurons tends to infinity. These equations can be derived in several ways, either heuristically following the lines of Amari (Amari, 1972; Amari et al., 1977), Sompolinsky (Crisanti et al., 1990; Sompolinsky et al., 1998), and Cessac (Cessac, 1995; Samuelides and Cessac, 2007), or rigorously as in the work of Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). The purpose of this article is not the derivation of these mean-field equations but to prove that they are well-posed and to provide an algorithm for computing their solution. Before we do this we provide the reader with two important examples of such mean-field equations. The first example is what we call the simple model, a straightforward generalization of the case studied by Amari and Sompolinsky. The second example is a neuronal assembly model, or neural mass model, as introduced by Freeman (1975) and exemplified in Jansen and Rit's (1995) cortical column model.

In Section “Existence and Uniqueness of Solutions in Finite Time” we consider the problem of solutions over a finite time interval [*t*_{0}, *T*]. We prove, under some mild assumptions, the existence and uniqueness of a solution of the dynamic mean-field equations given an initial condition at time *t*_{0}. The proof consists in showing that a nonlinear equation defined on the set of multidimensional Gaussian random processes defined on [*t*_{0}, *T*] has a fixed point. We extend this proof in Section “Existence and Uniqueness of Stationary Solutions” to the case of stationary solutions over the time interval [−∞, *T*] for the simple model. Both proofs are constructive and provide an algorithm for computing numerically the solutions of the mean-field equations.

We then study in Section “Numerical Experiments” the complexity and the convergence rate of this algorithm and put it to good use: we first compare our numerical results to the theoretical results of Sompolinsky and colleagues (Crisanti et al., 1990; Sompolinsky et al., 1998). We then provide an example of numerical experiments in the case of two populations of neurons where the role of the mean-field fluctuations is emphasized.
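Both proofs being constructive, the resulting algorithm can be sketched concretely. For a one-population version of the simple model introduced later (*k* = 1, exponential synapses), it iterates a map on Gaussian laws: from the current mean and covariance of the putative solution, compute the statistics of the effective interaction field by Gaussian quadrature, then push them through the linear dynamics to obtain the next mean and covariance. The sketch below is ours, not the paper's code, and every numerical value (τ, J̄, σ, f, I, the grid, the choice S = tanh) is an illustrative assumption:

```python
import numpy as np

# Fixed-point iteration on the Gaussian law (mean, covariance) of the solution
# of the one-population simple model. All parameter values are illustrative.
tau, Jbar, sigJ, f, I = 1.0, 0.5, 0.5, 0.2, 0.3
S = np.tanh                       # sigmoidal firing-rate function (assumption)
T, n = 2.0, 40                    # time horizon and grid size
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]

# Causal kernel of the linear part: G[i, j] = exp(-(t_i - t_j)/tau) dt, t_j <= t_i
G = np.where(t[:, None] >= t[None, :],
             np.exp(-(t[:, None] - t[None, :]) / tau), 0.0) * dt

# Covariance contributed by the additive Brownian noise (closed form, V(0) = 0)
tmin = np.minimum(t[:, None], t[None, :])
Cw = f ** 2 * (tau / 2.0) * np.exp(-(t[:, None] + t[None, :]) / tau) \
     * (np.exp(2.0 * tmin / tau) - 1.0)

nodes, weights = np.polynomial.hermite.hermgauss(12)
w = weights / np.sqrt(np.pi)      # E[h(Z)] ~ w @ h(sqrt(2) * nodes), Z ~ N(0, 1)

def effective_stats(mu, C):
    """Mean m(t) and covariance D(t, s) of the effective interaction field."""
    sd = np.sqrt(np.maximum(np.diag(C), 1e-12))
    m = np.array([w @ S(mu[i] + np.sqrt(2.0) * sd[i] * nodes) for i in range(n)])
    D = np.empty((n, n))
    for i in range(n):
        Si = S(mu[i] + np.sqrt(2.0) * sd[i] * nodes)
        for j in range(n):
            a = C[i, j] / sd[i]                 # regression of X(t_j) on X(t_i)
            b = np.sqrt(max(C[j, j] - a * a, 0.0))
            Xj = mu[j] + np.sqrt(2.0) * (a * nodes[:, None] + b * nodes[None, :])
            D[i, j] = w @ ((Si[:, None] * S(Xj)) @ w)   # E[S(X_t) S(X_s)]
    return Jbar * m, sigJ ** 2 * D

mu, C = np.zeros(n), Cw.copy()
for _ in range(60):
    m, D = effective_stats(mu, C)
    mu_next = G @ (m + I)               # push mean through the linear dynamics
    C_next = G @ D @ G.T + Cw           # likewise for the covariance
    gap = np.max(np.abs(mu_next - mu)) + np.max(np.abs(C_next - C))
    mu, C = mu_next, C_next
```

After a few tens of iterations `gap`, the distance between successive iterates, is numerically zero, illustrating the convergence of the constructive scheme on a finite interval.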

Along the paper we introduce several constants. To help the reader, we have collected in Table 1 of Appendix D the most important ones and the places where they are defined in the text.

In this section we introduce the classical neural mass models and compute the related mean-field equations they satisfy in the limit of an infinite number of neurons.

### The General Model

#### General framework

We consider a network composed of *N* neurons indexed by *i* ∈ {1,…,*N*}, belonging to *P* populations indexed by α ∈ {1,…,*P*}, such as those shown in Figure 1. Let *N*_{α} be the number of neurons in population α. We have $N = \sum_{\alpha=1}^{P} N_\alpha$. We define the function that associates to each neuron *i*, *i* = 1,…,*N*, the population it belongs to.

**Figure 1. General network considered:** **N** neurons belonging to **P** populations are interconnected with random synaptic weights whose probability distributions only depend upon the population indexes, see text.

**Definition 1.** The function *p*: {1,…,*N*} → {1,…,*P*} associates to each neuron *i* ∈ {1,…,*N*} the population α = *p*(*i*) ∈ {1,…,*P*} it belongs to.

We consider that each neuron *i* is described by its membrane potential *V*_{i}(*t*), and the related instantaneous firing rate is deduced from it through a relation of the form ν_{i}(*t*) = *S*_{i}(*V*_{i}(*t*)) (Dayan and Abbott, 2001; Gerstner and Kistler, 2002), where *S*_{i} is a sigmoidal function.

A single action potential from neuron *j* generates a postsynaptic potential PSP_{ij}(*u*) on the postsynaptic neuron *i*, where *u* is the time elapsed after the spike is received. We neglect the delays due to the distance traveled down the axon by the spikes.

Assuming that the postsynaptic potentials sum linearly, the average membrane potential of neuron *i* is

$$V_i(t) = \sum_{j,k} \mathrm{PSP}_{ij}(t - t^j_k),$$

where the sum is taken over the arrival times $t^j_k$ of the spikes produced by the neurons *j* after some reference time *t*_{0}. The number of spikes arriving between *t* and *t* + *dt* is ν_{j}(*t*)*dt*. Therefore we have

$$V_i(t) = \sum_{j} \int_{t_0}^{t} \mathrm{PSP}_{ij}(t-s)\,\nu_j(s)\,ds,$$

or, equivalently,

$$V_i(t) = \sum_{j} \int_{t_0}^{t} \mathrm{PSP}_{ij}(t-s)\,S_j(V_j(s))\,ds. \tag{1}$$

The PSP_{ij}s can depend on several variables, in order to account for instance for adaptation or learning. We now make the simplifying assumption that the shape of the postsynaptic potential PSP_{ij} only depends on the postsynaptic population, which corresponds to the voltage-based models in Ermentrout's (1998) classification.

**The voltage-based model.** The assumption, made in Hopfield (1984), is that the postsynaptic potential has the same shape no matter which presynaptic population caused it; the sign and amplitude may vary though. This leads to the relation

$$\mathrm{PSP}_{ij}(t) = \bar{J}_{ij}\, g_i(t),$$

where *g*_{i} represents the unweighted shape (called a g-shape) of the postsynaptic potentials and $\bar{J}_{ij}$ is the strength of the postsynaptic potentials elicited by neuron *j* on neuron *i*. At this stage of the discussion, these weights are supposed to be deterministic. This is reflected in the notation $\bar{J}_{ij}$, which indicates an average value^{1}. From Eq. 1 we have

$$V_i(t) = \sum_{j} \bar{J}_{ij} \int_{t_0}^{t} g_i(t-s)\,\nu_j(s)\,ds. \tag{3}$$
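In discrete time the convolution above is straightforward to evaluate. A minimal sketch for one postsynaptic neuron with an exponential g-shape; all weights, rates, and constants below are illustrative assumptions:

```python
import numpy as np

# Discretized voltage-based model for one postsynaptic neuron i:
# V(t) = sum_j Jbar_j * int_0^t g(t - s) nu_j(s) ds  (cf. Eq. 3).
dt, tau = 1e-3, 0.02
t = np.arange(0.0, 0.2, dt)
g = np.exp(-t / tau)                      # exponential g-shape of neuron i
Jbar = np.array([0.5, -0.3, 0.8])         # average weights from 3 presynaptic neurons
nu = np.vstack([np.full_like(t, 10.0),    # presynaptic rates nu_j(t): a constant,
                20.0 * (t > 0.05),        # a step at t = 50 ms,
                5.0 + 5.0 * np.sin(2.0 * np.pi * 10.0 * t)])  # an oscillation

# One causal convolution per presynaptic neuron, then the weighted sum over j
V = Jbar @ np.array([np.convolve(g, r)[: t.size] * dt for r in nu])
```

Truncating `np.convolve` to the first `t.size` samples keeps exactly the causal part $\sum_{s \le t} g(t-s)\,\nu_j(s)\,dt$ of the convolution.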

So far we have only considered the synaptic inputs to the neurons. We enrich our model by assuming that the neuron *i* also receives an external current density composed of a deterministic part, noted *I*_{i}(*t*), and a stochastic part, noted *n*_{i}(*t*), so that

$$V_i(t) = \sum_{j} \bar{J}_{ij} \int_{t_0}^{t} g_i(t-s)\,\nu_j(s)\,ds + \int_{t_0}^{t} g_i(t-s)\,\big(I_i(s) + n_i(s)\big)\,ds.$$

We assume, and this is essential for deriving the mean-field equations below, that all indexed quantities depend only upon the *P* populations of neurons (see Definition 1), i.e.,

$$S_i = S_{p(i)}, \qquad I_i = I_{p(i)}, \qquad n_i \sim n_{p(i)}, \qquad \bar{J}_{ij} = \bar{J}_{p(i)p(j)}, \qquad g_i = g_{p(i)}, \tag{4}$$

where *x* ∼ *y* indicates that the two random variables *x* and *y* have the same probability distribution. In other words, all neurons in the same population are described by identical equations (in law).

The g-shapes describe the shape of the postsynaptic potentials and can reasonably well be approximated by smooth functions.

In detail we assume that *g*_{α}, α = 1,…,*P*, is the Green function of a linear differential equation of order *k*, i.e., satisfies

$$\sum_{l=0}^{k} b_{l\alpha}(t)\,\frac{d^l g_\alpha}{dt^l}(t) = \delta(t), \tag{5}$$

where δ(*t*) is the Dirac delta function. The functions *b*_{lα}(*t*), *l* = 0,…,*k*, α = 1,…,*P*, are assumed to be continuous. We also assume for simplicity that

$$b_{k\alpha}(t) = c_\alpha \neq 0 \tag{6}$$

for all *t* ∈ ℝ. We note $\mathcal{L}_\alpha$ the corresponding differential operator:

$$\mathcal{L}_\alpha = \sum_{l=0}^{k} b_{l\alpha}(t)\,\frac{d^l}{dt^l}. \tag{7}$$

Applying $\mathcal{L}_{p(i)}$ to both sides of Eq. 3, using Eq. 7 and the fact that ν_{j}(*s*) = *S*_{j}(*V*_{j}(*s*)), we obtain a *k*th-order differential equation for *V*_{i}:

$$\mathcal{L}_{p(i)}\, V_i(t) = \sum_{j} \bar{J}_{ij}\, S_j(V_j(t)) + I_i(t) + n_i(t). \tag{8}$$

With a slight abuse of notation, we split the sum with respect to *j* into *P* sums:

$$\mathcal{L}_{p(i)}\, V_i(t) = \sum_{\beta=1}^{P} \sum_{j:\,p(j)=\beta} \bar{J}_{ij}\, S_\beta(V_j(t)) + I_i(t) + n_i(t).$$

We classically turn the *k*th-order differential Eq. 8 into a *k*-dimensional system of coupled first-order differential equations (Eq. 9; we divided both sides of the last equation by *c*_{i}, see Eq. 6). A well-known example of g-shapes, see Section “Example II: The model of Jansen and Rit” below or Gerstner and Kistler (2002), is

$$g_\alpha(t) = K e^{-t/\tau_\alpha}\, Y(t), \tag{10}$$

where *Y*(*t*) is the Heaviside function. This is an exponentially decaying postsynaptic potential corresponding to *k* = 1, *b*_{0α} = 1/(*K*τ_{α}), *b*_{1α} = 1/*K* in Eq. 5.

Another well-known example is

$$g_\alpha(t) = K\, t\, e^{-t/\tau_\alpha}\, Y(t).$$

This is a somewhat smoother function, corresponding to *k* = 2 in Eq. 5.
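A quick numerical sanity check of these two g-shapes (taking the second, smoother example to be the standard alpha-shaped kernel, an assumption on our part; *K* and τ are arbitrary illustrative values):

```python
import numpy as np

K, tau = 1.0, 0.01                      # illustrative amplitude and time constant
t = np.linspace(0.0, 0.1, 1001)
dt = t[1] - t[0]

g_exp = K * np.exp(-t / tau)            # exponentially decaying PSP (Eq. 10)
g_alpha = K * t * np.exp(-t / tau)      # smoother, alpha-shaped PSP (assumed form)

# For t > 0 the exponential g-shape solves g'(t) + g(t)/tau = 0, i.e. it is the
# Green function of a first-order operator (k = 1 in the classification above).
resid = np.gradient(g_exp, t) + g_exp / tau
```

The residual vanishes (up to finite-difference error) away from the jump at 0, and the smoother kernel starts at 0 and peaks at *t* = τ instead of jumping at the spike arrival time.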

**The dynamics.** We modify Eq. 9 by perturbing the first *k* − 1 equations with Brownian noise and assuming that *n*_{i}(*t*) is white noise. This has the effect that the quantities that appear in Eq. 9 are no longer the derivatives of *V*_{i} up to order *k* − 1. This becomes true again only in the limit where the added Brownian noise is null. This may seem artificial at first glance, but (1) it is a technical assumption that is necessary in the proofs of the well-posedness of the mean-field equations, see Assumption 1 below, and (2) it generates a rich class of external stochastic inputs, as shown below. With this in mind, Eq. 9 now reads

$$\begin{aligned} dV_{l,i}(t) &= V_{l+1,i}(t)\,dt + f_{l,p(i)}(t)\,dW_{l,i}(t), \qquad l = 0,\ldots,k-2,\\ dV_{k-1,i}(t) &= \Big[-\sum_{l=0}^{k-1} b_{l,p(i)}(t)\,V_{l,i}(t) + \sum_{j} \bar{J}_{ij}\, S_j(V_{0,j}(t)) + I_i(t)\Big]\,dt + f_{k-1,p(i)}(t)\,dW_{k-1,i}(t), \end{aligned} \tag{12}$$

where the *W*_{li}(*t*), *l* = 0,…,*k* − 1, *i* = 1,…,*N*, are *kN* independent standard Brownian processes. Because we want the neurons in the same class to be essentially identical, we also assume that the functions *f*_{li}(*t*) that control the amount of noise on each derivative satisfy *f*_{li}(*t*) = *f*_{l p(i)}(*t*), *l* = 0,…,*k* − 1, *i* = 1,…,*N*.

Note that in the limit *f*_{lα}(*t*) = 0 for *l* = 0,…,*k* − 1 and α = 1,…,*P*, the components *V*_{li}(*t*) are the derivatives of the membrane potential *V*_{i}, for *l* = 0,…,*k* − 1, and Eq. 12 turns into Eq. 9. The system of differential Eq. 12 implies that the class of admissible external stochastic inputs *n*_{i}(*t*) to the neuron *i* is that of Brownian noise integrated through the filter of the synapse, i.e., involving the *l*th primitives of the Brownian motion for *l* ≤ *k*.

We now introduce the *k* − 1 *N*-dimensional vectors **V**_{l}(*t*) = [*V*_{l1}(*t*),…,*V*_{lN}(*t*)]^{T}, *l* = 1,…,*k* − 1, of the *l*th-order derivatives (in the limit of *f*_{l p(i)}(*t*) = 0) of **V**(*t*), and concatenate them with **V**(*t*) into the *Nk*-dimensional vector

$$\mathcal{V}(t) = \big[\mathbf{V}(t)^T, \mathbf{V}_1(t)^T, \ldots, \mathbf{V}_{k-1}(t)^T\big]^T. \tag{13}$$

The *N*-neurons network is described by the *Nk*-dimensional vector $\mathcal{V}(t)$. By definition the *l*th *N*-dimensional component $\mathcal{V}_l(t)$ of $\mathcal{V}(t)$ is equal to **V**_{l}(*t*). In the limit *f*_{lα}(*t*) = 0 we have $\mathcal{V}_l(t) = d^l\mathbf{V}(t)/dt^l$.

We next write the equations governing the time variation of the *k* *N*-dimensional sub-vectors of $\mathcal{V}(t)$, i.e., the derivatives of order 0,…,*k* − 1 of **V**(*t*). These are vector versions of Eq. 12. We write

$$d\mathcal{V}_l(t) = \mathcal{V}_{l+1}(t)\,dt + \mathbf{F}_l(t)\,d\mathbf{W}_l(t), \qquad l = 0,\ldots,k-2. \tag{14}$$

**F**_{l}(*t*) is the *N* × *N* diagonal matrix diag(*f*_{l1}(*t*),…,*f*_{lP}(*t*)), where each *f*_{lα}(*t*), α = 1,…,*P*, is repeated *N*_{α} times, and the **W**_{l}(*t*), *l* = 0,…,*k* − 2, are *k* − 1 *N*-dimensional independent standard Brownian processes.

The equation governing the (*k* − 1)th differential of the membrane potential has a linear part determined by the differential operators $\mathcal{L}_\alpha$, α = 1,…,*P*, and accounts for the external inputs (deterministic and stochastic) and the activity of the neighbors. We note $\mathcal{B}(t)$ the *N* × *Nk* matrix describing the relation between the neurons' membrane potentials and their derivatives up to the order *k* − 1, on one hand, and the (*k* − 1)th derivative of **V**, on the other. This matrix is defined as the concatenation of the *k* *N* × *N* diagonal matrixes **B**_{l}(*t*) = diag(*b*_{l1}(*t*),…,*b*_{lP}(*t*)) (each *b*_{lα}(*t*) repeated *N*_{α} times), for *l* = 0,…,*k* − 1. We have:

$$d\mathcal{V}_{k-1}(t) = \big(-\mathcal{B}(t)\,\mathcal{V}(t) + \bar{\mathbf{J}}\,\mathbf{S}(\mathcal{V}_0(t)) + \mathbf{I}(t)\big)\,dt + \mathbf{F}_{k-1}(t)\,d\mathbf{W}_{k-1}(t), \tag{15}$$

where **W**_{k−1}(*t*) is an *N*-dimensional standard Brownian process independent of the **W**_{l}(*t*), *l* = 0,…,*k* − 2. The coordinates of the *N*-dimensional vector **I**(*t*) are the external deterministic input currents, $\bar{\mathbf{J}}$ is the *N* × *N* matrix of the weights $\bar{J}_{ij}$, which are equal to $\bar{J}_{p(i)p(j)}$ (see Eq. 4), and **S** is a mapping from ℝ^{N} to ℝ^{N} such that $\mathbf{S}(\mathbf{x}) = [S_1(x_1),\ldots,S_N(x_N)]^T$.

We define the *kN* × *kN* matrix $\mathcal{L}(t)$ as the block companion matrix built from the blocks $\mathcal{B}(t)$, Id_{N} and 0_{N×N}, so that Eqs. 14 and 15 have drift $-\mathcal{L}(t)\,\mathcal{V}(t)$ up to the input and interaction terms, where Id_{N} is the *N* × *N* identity matrix and 0_{N×N} the *N* × *N* null matrix. We also define the two *kN*-dimensional vectors $\tilde{\mathbf{I}}(t) = [\mathbf{0}_N^T,\ldots,\mathbf{0}_N^T,\mathbf{I}(t)^T]^T$ and $\tilde{\mathbf{S}}(\mathcal{V}(t)) = [\mathbf{0}_N^T,\ldots,\mathbf{0}_N^T,(\bar{\mathbf{J}}\,\mathbf{S}(\mathcal{V}_0(t)))^T]^T$, where 0_{N} is the *N*-dimensional null vector. Combining Eqs. 14 and 15, the full equation satisfied by $\mathcal{V}$ can be written:

$$d\mathcal{V}(t) = \big(-\mathcal{L}(t)\,\mathcal{V}(t) + \tilde{\mathbf{S}}(\mathcal{V}(t)) + \tilde{\mathbf{I}}(t)\big)\,dt + \mathbf{F}(t)\,d\mathbf{W}_t, \tag{17}$$

where the *kN* × *kN* matrix **F**(*t*) is equal to diag(**F**_{0},…,**F**_{k−1}) and **W**_{t} is a *kN*-dimensional standard Brownian process.

### The Mean-Field Equations

One of the central goals of this paper is to analyze what happens when we let the total number *N* of neurons grow to infinity. Can we “summarize” the *kN* equations (Eq. 17) with a smaller number of equations that would account for the populations' activity? We show that the answer to this question is yes and that the populations' activity can indeed be represented by *P* stochastic differential equations of order *k*. Despite the fact that their solutions are Gaussian processes, these equations turn out to be quite complicated because these processes are non-Markovian.

We assume that the proportions of neurons in each population are nontrivial, i.e.:

$$\lim_{N\to\infty} \frac{N_\alpha}{N} = c_\alpha \in (0,1), \qquad \alpha = 1,\ldots,P.$$

If this were not the case the corresponding population would not affect the global behavior of the system, would not contribute to the mean-field equation, and could be neglected.

#### General derivation of the mean-field equation

When investigating the structure of such mesoscopic neural assemblies as cortical columns, experimentalists are able to provide the average value $\bar{J}_{\alpha\beta}$ of the synaptic efficacy *J*_{ij} from neural population *j* to population *i*. These values are obviously subject to some uncertainty, which can be modeled with Gaussian random variables. We also impose that the distribution of the *J*_{ij}s depends only on the population pair α = *p*(*i*), β = *p*(*j*), and on the total number *N*_{β} of neurons of population β:

$$J_{ij} \sim \mathcal{N}\!\left(\frac{\bar{J}_{\alpha\beta}}{N_\beta},\; \frac{\sigma_{\alpha\beta}^2}{N_\beta}\right). \tag{18}$$

We also make the additional assumption that the *J*_{ij}s are independent. This is a reasonable assumption as far as modeling cortical columns from experimental data is concerned. Indeed, it is already difficult for experimentalists to provide the average value of the synaptic strength $\bar{J}_{\alpha\beta}$ from population β to population α and to estimate the corresponding error bars (σ_{αβ}), but measuring synaptic efficacy correlations in a large assembly of neurons seems currently out of reach. Though it is known that synaptic weights are indeed correlated (e.g., via synaptic plasticity mechanisms), these correlations are built by the dynamics via a complex interwoven evolution between neuron and synapse dynamics, and postulating the form of synaptic weight correlations requires, on theoretical grounds, a detailed investigation of the whole history of the neurons–synapses dynamics.

Let us now discuss the scaling form of the probability distribution (Eq. 18) of the *J*_{ij}s, namely the division by *N*_{β} of the mean and variance of the Gaussian distribution. This scaling ensures that the “local interaction field” $\eta_{i\beta}(t) = \sum_{j:\,p(j)=\beta} J_{ij}\, S_\beta(V_j(t))$, summarizing the effects of the neurons in population β on neuron *i*, has a mean and variance which do not depend on *N*_{β} and are only controlled by the phenomenological parameters $\bar{J}_{\alpha\beta}$, σ_{αβ}.

We are interested in the limit law when *N* → ∞ of the *N*-dimensional vector **V** defined in Eq. 3 under the joint law of the connectivities and the Brownian motions, which we call the mean-field limit. This law can be described by a set of *P* equations, the mean-field equations. As mentioned in the introduction, these equations can be derived in several ways, either heuristically as in the work of Amari (Amari, 1972; Amari et al., 1977), Sompolinsky (Crisanti et al., 1990; Sompolinsky et al., 1998), and Cessac (Cessac, 1995; Samuelides and Cessac, 2007), or rigorously as in the work of Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). We derive them here in a pedestrian way, prove that they are well-posed, and provide an algorithm for computing their solution. The effective description of the network population by population is possible because the neurons in each population are interchangeable, i.e., have the same probability distribution under the joint law of the multidimensional Brownian motion and the connectivity weights. This is the case because of the relations (Eqs. 4 and 16), which imply the form of Eq. 17.
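This scaling is easy to verify empirically: drawing the weights onto a single neuron from a Gaussian with mean J̄/N and variance σ²/N, and freezing an arbitrary set of presynaptic rates, the local interaction field keeps an O(1) mean and variance as N grows. All numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
Jbar, sigma = 1.5, 0.8                    # illustrative phenomenological parameters
trials = 2_000
pool = rng.uniform(0.2, 0.8, size=5_000)  # frozen, arbitrary firing rates S(V_j)

stats = {}
for N in (100, 1_000, 5_000):
    rates = pool[:N]
    # weights onto one neuron: J_ij ~ N(Jbar/N, sigma^2/N), independent (Eq. 18)
    J = rng.normal(Jbar / N, sigma / np.sqrt(N), size=(trials, N))
    eta = J @ rates                       # local interaction field, many draws of J
    stats[N] = (eta.mean(), eta.var(),
                Jbar * rates.mean(),                  # predicted mean
                sigma ** 2 * (rates ** 2).mean())     # predicted variance
```

For every `N` the empirical mean and variance of the field match the N-independent predictions J̄·⟨S⟩ and σ²·⟨S²⟩.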

**The main ideas of dynamic mean-field equations.** Before diving into the mathematical developments, let us comment briefly on the basic ideas and conclusions of the mean-field approach. Following Eq. 8, the evolution of the membrane potential of some neuron *i* in population α is given by:

$$\mathcal{L}_\alpha V_i(t) = \sum_{\beta=1}^{P} \sum_{j:\,p(j)=\beta} J_{ij}\, S_\beta(V_j(t)) + I_\alpha(t) + n_i(t). \tag{19}$$

Using the assumption that *S*_{i}, *I*_{i}, *n*_{i} depend only on the neuron's population, this gives:

$$\mathcal{L}_\alpha V_i(t) = \sum_{\beta=1}^{P} \eta_{i\beta}(V(t)) + I_\alpha(t) + n_i(t),$$

where we have introduced the local interaction field $\eta_{i\beta}(V(t)) = \sum_{j:\,p(j)=\beta} J_{ij}\, S_\beta(V_j(t))$, summarizing the effects of the neurons in population β on neuron *i*, and whose probability distribution only depends on the pre- and postsynaptic populations α and β.

In the simplest situation, where the *J*_{ij}s have no fluctuations (σ_{αβ} = 0), this field reads $\eta_{i\beta}(V(t)) = \bar{J}_{\alpha\beta}\,\Phi_\beta(V(t))$. The term $\Phi_\beta(V(t)) = \frac{1}{N_\beta}\sum_{j:\,p(j)=\beta} S_\beta(V_j(t))$ is the frequency rate of the neurons in population β, averaged over this population. Introducing in the same way the average membrane potential in population β, $\bar{V}_\beta(t) = \frac{1}{N_\beta}\sum_{j:\,p(j)=\beta} V_j(t)$, one obtains:

$$\mathcal{L}_\alpha \bar{V}_\alpha(t) = \sum_{\beta=1}^{P} \bar{J}_{\alpha\beta}\,\Phi_\beta(V(t)) + I_\alpha(t) + \bar{n}_\alpha(t).$$

This equation resembles very much Eq. 19 if one makes the following reasoning: “Since Φ_{β}(*V*(*t*)) is the frequency rate of the neurons in population β, averaged over this population, and since, for one neuron, the frequency rate is ν_{i}(*t*) = *S*_{i}(*V*_{i}(*t*)), let us write Φ_{β}(*V*(*t*)) = *S*_{β}($\bar{V}_\beta$(*t*))”. This leads to:

$$\mathcal{L}_\alpha \bar{V}_\alpha(t) = \sum_{\beta=1}^{P} \bar{J}_{\alpha\beta}\, S_\beta(\bar{V}_\beta(t)) + I_\alpha(t) + \bar{n}_\alpha(t), \tag{22}$$

which has exactly the same form as Eq. 19 but at the level of a neuron population. Equations such as (22), which are obtained via the very strong assumption

$$\Phi_\beta(V(t)) = S_\beta(\bar{V}_\beta(t)),$$

are typically those obtained by Jansen and Rit (1995). Surprisingly, they are correct and can be rigorously derived, as discussed below, provided σ_{αβ} = 0.

However, they cannot remain true as soon as the synaptic weights fluctuate. Indeed, the transition from Eqs. 19 to 22 corresponds to a projection from an *NP*-dimensional space to a *P*-dimensional one, which holds because the *NP* × *NP*-dimensional synaptic weight matrix has in fact only *P* linearly independent rows. This does not hold anymore if the *J*_{ij}s are random, and the synaptic weight matrix has generically full rank. Moreover, the effect of the nonlinear dynamics on the variations of the synaptic weights about their mean is not small, even if the σ_{αβ}s are, and the real trajectories of Eq. 19 can depart strongly from the trajectories of Eq. 22. This is the main message of this paper.

To finish this qualitative description, let us say in a few words what happens to the mean-field equations when σ_{αβ} ≠ 0. We show below that the local interaction field η_{αβ}(*V*(*t*)) becomes, in the limit *N*_{β} → ∞, a time-dependent Gaussian field *U*_{αβ}(*t*). One of the main results is that this field is non-Markovian, i.e., it integrates the whole history via the synaptic responses *g*, which are convolution products. Despite the fact that the evolution equation for the membrane potential averaged over a population writes in a very simple form,

$$\mathcal{L}_\alpha \bar{V}_\alpha(t) = \sum_{\beta=1}^{P} U_{\alpha\beta}(t) + I_\alpha(t) + \bar{n}_\alpha(t),$$

it hides a real difficulty, since *U*_{αβ}(*t*) depends on the whole past. Therefore, *the introduction of synaptic weight variability leads to a drastic change in neural mass models, as we now develop*.
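The failure of the strong assumption Φ_β(V(t)) = S_β(V̄_β(t)) when the membrane potentials are spread out can be seen with a one-line Monte-Carlo experiment: for a sigmoid, the population-averaged rate E[S(V)] differs markedly from S(E[V]) unless the spread is negligible (S = tanh and the distribution parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
S = np.tanh                          # sigmoidal rate function (illustrative)
mu, sd = 1.0, 1.5                    # mean and spread of the potentials V_j

V = rng.normal(mu, sd, size=1_000_000)
phi = S(V).mean()                    # population-averaged rate, Phi
naive = S(mu)                        # the "strong assumption" S(V-bar)

V0 = rng.normal(mu, 1e-3, size=1_000_000)
phi0 = S(V0).mean()                  # with negligible spread the two agree
```

The gap between `naive` and `phi` is of order the sigmoid's curvature times the variance of the potentials, which is exactly what the Jansen–Rit-type reduction discards.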

**The Mean-Field equations.** We note $C([t_0,T];\mathbb{R}^P)$ (respectively $C((-\infty,T];\mathbb{R}^P)$) the set of continuous functions from the real interval [*t*_{0}, *T*] (respectively (−∞, *T*]) to $\mathbb{R}^P$. By assigning a probability to subsets of such functions, a continuous stochastic process *X* defines a positive measure of unit mass on $C([t_0,T];\mathbb{R}^P)$ (respectively $C((-\infty,T];\mathbb{R}^P)$). This set of positive measures of unit mass is noted $\mathcal{M}([t_0,T])$ (respectively $\mathcal{M}((-\infty,T])$).

We now define a process of particular importance for describing the limit process: the effective interaction process.

**Definition 2.** (Effective interaction process). Let *X* ∈ $\mathcal{M}([t_0,T])$ (respectively $\mathcal{M}((-\infty,T])$) be a given Gaussian stochastic process. The effective interaction term is the Gaussian process $\mathbf{U}^X \in \mathcal{M}([t_0,T])$ (respectively $\mathcal{M}((-\infty,T])$) defined by:

$$\mathbb{E}\big[U^X_{\alpha\beta}(t)\big] = m^X_{\alpha\beta}(t), \qquad \mathrm{Cov}\big(U^X_{\alpha\beta}(t),\, U^X_{\alpha\beta}(s)\big) = \Delta^X_{\alpha\beta}(t,s),$$

where

$$m^X_{\alpha\beta}(t) = \bar{J}_{\alpha\beta}\,\mathbb{E}\big[S_\beta(X_\beta(t))\big]$$

and

$$\Delta^X_{\alpha\beta}(t,s) = \sigma_{\alpha\beta}^2\,\mathbb{E}\big[S_\beta(X_\beta(t))\,S_\beta(X_\beta(s))\big].$$

In order to construct the solution of the mean-field equations (see Section “Existence and Uniqueness of Solutions in Finite Time”) we will need more explicit expressions for $m^X_{\alpha\beta}$ and $\Delta^X_{\alpha\beta}$, which we obtain in the next proposition.

**Proposition 1.** Let $\mu^X$ be the mean of the process *X* and $C^X$ be its covariance matrix. The vectors **m**^{X}(*t*) and Δ^{X}(*t*, *s*) that appear in the definition of the effective interaction process **U**^{X} are given by the following expressions:

$$m^X_{\alpha\beta}(t) = \bar{J}_{\alpha\beta} \int_{\mathbb{R}} S_\beta\Big(x\sqrt{C^X_{\beta\beta}(t,t)} + \mu^X_\beta(t)\Big)\, Dx$$

and

$$\Delta^X_{\alpha\beta}(t,s) = \sigma_{\alpha\beta}^2 \int_{\mathbb{R}^2} S_\beta\Big(x\sqrt{C^X_{\beta\beta}(t,t)} + \mu^X_\beta(t)\Big)\, S_\beta\big(a x + b y + \mu^X_\beta(s)\big)\, Dx\, Dy,$$

with $a = C^X_{\beta\beta}(t,s)\big/\sqrt{C^X_{\beta\beta}(t,t)}$ and $b = \sqrt{C^X_{\beta\beta}(s,s) - a^2}$, where

$$Dx = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx$$

is the probability density of a 0-mean, unit variance, Gaussian variable.

*Proof.* The results follow immediately, by a change of variable, from the fact that *X*_{β}(*t*) is a univariate Gaussian random variable of mean μ_{β}(*t*) and variance *C*_{ββ}(*t*, *t*), and that the pair (*X*_{β}(*t*), *X*_{β}(*s*)) is a bivariate Gaussian random variable with mean (μ_{β}(*t*), μ_{β}(*s*)) and covariance matrix

$$\begin{pmatrix} C_{\beta\beta}(t,t) & C_{\beta\beta}(t,s) \\ C_{\beta\beta}(t,s) & C_{\beta\beta}(s,s) \end{pmatrix}.$$
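Proposition 1 can be sanity-checked numerically by evaluating the two Gaussian integrals with Gauss–Hermite quadrature and comparing against a direct Monte-Carlo estimate over the corresponding bivariate Gaussian; the sigmoid and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
S = np.tanh                                   # illustrative sigmoid S_beta
Jbar, sigma = 1.0, 0.7                        # illustrative Jbar_ab, sigma_ab
mu_t, mu_s = 0.3, -0.2                        # mu_beta(t), mu_beta(s)
Ctt, Css, Cts = 1.0, 0.8, 0.5                 # C(t,t), C(s,s), C(t,s)

nodes, weights = np.polynomial.hermite.hermgauss(40)
w = weights / np.sqrt(np.pi)
x = np.sqrt(2.0) * nodes                      # standard-normal abscissae

# m(t) = Jbar * E[S(sqrt(C(t,t)) x + mu(t))], x ~ N(0, 1)
m_quad = Jbar * (w @ S(np.sqrt(Ctt) * x + mu_t))

# Delta(t,s) = sigma^2 * E[S(X_t) S(X_s)], written over two independent N(0, 1)
a = Cts / np.sqrt(Ctt)
b = np.sqrt(Css - a * a)
Xt = np.sqrt(Ctt) * x[:, None] + mu_t
Xs = a * x[:, None] + b * x[None, :] + mu_s
D_quad = sigma ** 2 * (w @ ((S(Xt) * S(Xs)) @ w))

# Monte-Carlo reference directly from the bivariate Gaussian (X_t, X_s)
L = np.linalg.cholesky(np.array([[Ctt, Cts], [Cts, Css]]))
Z = rng.standard_normal((2_000_000, 2)) @ L.T + np.array([mu_t, mu_s])
m_mc = Jbar * S(Z[:, 0]).mean()
D_mc = sigma ** 2 * (S(Z[:, 0]) * S(Z[:, 1])).mean()
```

The change of variables `(x, y) -> (a x + b y)` is exactly the regression decomposition used in the proof.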

Choose *P* neurons *i*_{1},…,*i*_{P}, one in each population (neuron *i*_{α} belongs to the population α). We define the *kP*-dimensional vector obtained by choosing, in each of the *k* *N*-dimensional components $\mathcal{V}_l(t)$ of the vector $\mathcal{V}(t)$ defined in Eq. 13, the coordinates of indexes *i*_{1},…,*i*_{P}. Then it can be shown, using either a heuristic argument or large deviations techniques (see Appendix A), that this sequence of *kP*-dimensional processes converges in law, as *N* → ∞, to the process $\mathcal{V}(t)$ solution of the following mean-field equation:

$$d\mathcal{V}(t) = \big(-\mathbf{L}(t)\,\mathcal{V}(t) + \tilde{\mathbf{U}}_t + \tilde{\mathbf{I}}(t)\big)\,dt + \mathbf{F}(t)\,d\mathcal{W}_t, \tag{28}$$

where $\tilde{\mathbf{I}}(t) = [\mathbf{0}_P^T,\ldots,\mathbf{0}_P^T,\mathbf{I}(t)^T]^T$ and $\tilde{\mathbf{U}}_t = [\mathbf{0}_P^T,\ldots,\mathbf{0}_P^T,(\mathbf{U}^V(t)\cdot\mathbf{1})^T]^T$. **L** is the *kP* × *kP* matrix built from the blocks **B**_{l}(*t*) in the same companion form as $\mathcal{L}$ at the network level. The *P* × *P* matrixes **B**_{l}(*t*), *l* = 0,…,*k* − 1, are, with a slight abuse of notations, equal to diag(*b*_{l1}(*t*),…,*b*_{lP}(*t*)). $\mathcal{W}_t$ is a *kP*-dimensional standard Brownian process. $\mathbf{U}^V$ has the law of the *P*-dimensional effective interaction vector associated to the vector **V** (first *P*-dimensional component of $\mathcal{V}$) and is statistically independent of the external noise $\mathcal{W}_t$ and of the initial condition $\mathcal{V}(t_0)$ (when *t*_{0} > −∞). We have used for the matrixes **F**_{l}(*t*), *l* = 0,…,*k* − 1, the same abuse of notations as for the matrixes **B**_{l}(*t*), i.e., **F**_{l}(*t*) = diag(*f*_{l1}(*t*),…,*f*_{lP}(*t*)) for *l* = 0,…,*k* − 1. **I**(*t*) is the *P*-dimensional external current [*I*_{1}(*t*),…,*I*_{P}(*t*)]^{T}.

The process $\mathbf{U}^V$ is a *P* × *P*-dimensional process and is applied, as a matrix, to the *P*-dimensional vector **1** with all coordinates equal to 1, resulting in the *P*-dimensional vector $\mathbf{U}^V(t)\cdot\mathbf{1}$, whose mean and covariance functions can be readily obtained from Definition 2:

$$\mathbb{E}\big[(\mathbf{U}^V(t)\cdot\mathbf{1})_\alpha\big] = \sum_{\beta=1}^{P} m^V_{\alpha\beta}(t) \tag{29}$$

and

$$\mathrm{Cov}\big((\mathbf{U}^V(t)\cdot\mathbf{1})_\alpha,\,(\mathbf{U}^V(s)\cdot\mathbf{1})_\alpha\big) = \sum_{\beta=1}^{P} \Delta^V_{\alpha\beta}(t,s). \tag{30}$$

Equations (28) are formally very similar to Eq. 17, but there are some very important differences. The first are of dimension *kP* whereas the second are of dimension *kN*, which grows arbitrarily large when *N* → ∞. The interaction term of the second is simply the synaptic weight matrix applied to the activities of the *N* neurons at time *t*. The interaction term of the first equation, though innocuous looking, is in fact quite complex (see Eqs. 29 and 30). In fact the stochastic process that is the putative solution of Eq. 28 is in general non-Markovian.

To proceed further we formally integrate the equation using the flow, or resolvent, of Eq. 28, noted Φ_L(*t*, *t*_0) (see Appendix B), and we obtain, since we assumed **L** continuous, an implicit representation of the solution:

We now introduce for future reference a simpler model which is quite frequently used in the description of neural networks and has been formally analyzed by Sompolinsky and colleagues (Crisanti et al., 1990; Sompolinsky et al., 1998) in the case of one population (*P* = 1).

#### Example I: The Simple Model

In the Simple Model, each neuron's membrane potential decreases exponentially to its rest value if it receives no input, with a time constant τ_α depending only on the population. In other words, we assume that the g-shape describing the shape of the PSPs is Eq. 10, with *K* = 1 for simplicity. The noise is modeled by an independent Brownian process per neuron whose standard deviation is the same for all neurons belonging to a given population. Hence the dynamics of a given neuron *i* from population α of the network reads:

This is a special case of Eq. 12 where *k* = 1 and *b*_{0α}(*t*) = 1/τ_α, *b*_{1α}(*t*) = 1 for α = 1, …, *P*. The corresponding mean-field equation reads:

where the processes (*W*_α(*t*))_{*t* ≥ *t*_0} are independent standard Brownian motions and the effective interaction term is given by Definition 2. This is a special case of Eq. 28 with **L** = diag(−1/τ_1, …, −1/τ_P) and **F** = diag(*f*_1, …, *f*_P). Taking the expected value of both sides of Eq. 33 and using Eq. 26, we obtain that the mean μ_α(*t*) of *V*_α(*t*) satisfies the following differential equation. If

*C*_ββ(*t*, *t*) vanishes for all *t* ≥ *t*_0, this equation reduces to:

which is precisely the “naive” mean-field equation (Eq. 22) obtained with the assumption (Eq. 23). We see that Eqs. 22 are indeed correct provided that *C*_ββ(*t*, *t*) = 0 for all *t* ≥ *t*_0.

Equation 33 can be formally integrated implicitly and we obtain the following integral representation of the process *V*_α(*t*):

where *t*_0 is the initial time. It is an implicit equation on the probability distribution of *V*(*t*), a special case of Eq. 31. The variance *C*_αα(*t*, *t*) of *V*_α(*t*) can easily be obtained from Eq. 34. It reads:

where Δ_β(*u*, *v*) is given by Eq. 27. If σ

_αβ = 0 and *s*_α = 0, then *C*_αα(*t*, *t*) = 0 for all *t* ≥ *t*_0 is a solution of this equation. Thus the mean-field equations for the Simple Model reduce to the naive mean-field Eqs. 22 in this case. This conclusion extends as well to all models of synaptic responses ruled by Eq. 5. However, the equation for *C*_αα(*t*, *t*) shows that, in the general case, in order to solve the differential equation for μ_α(*t*) we need to know the whole past of the process. This exemplifies a previous statement on the non-Markovian nature of the solution of the mean-field equations.

#### Example II: The model of Jansen and Rit

One of the motivations of this study is to characterize the global behavior of an assembly of neurons, in particular to get a better understanding of recordings of cortical signals like EEG or MEG. One of the classical models of neural masses is Jansen and Rit's mass model (Jansen and Rit, 1995), in short the JR model (see Figure 2).

**Figure 2. (A)** Neural mass model: a population of pyramidal cells interacts with itself in an excitatory mode and with an inhibitory population of inter-neurons. **(B)** Block representation of the model. The *g* boxes account for the synaptic integration between neuronal populations. The *S* boxes simulate cell bodies of neurons by transforming the membrane potential of a population into an output firing rate. The coefficients *J*_αβ are the random synaptic efficiencies of population β on population α (1 represents the pyramidal population, 2 the excitatory feedback, and 3 the inhibitory inter-neurons).

The model features a population of pyramidal neurons that receives inhibitory inputs from local inter-neurons, excitatory feedback, and excitatory inputs from neighboring cortical units and sub-cortical structures such as the thalamus. The excitatory input is represented by an external firing rate that has a deterministic part *I*_1(*t*), accounting for the specific activity of other cortical units, and a stochastic part *n*_1(*t*), accounting for non-specific background activity. We formally consider the excitatory feedback of the pyramidal neurons to be a new neural population, making the number *P* of populations equal to 3. We also represent the external inputs to the other two populations by the sum of a deterministic part *I*_j(*t*) and a stochastic part *n*_j(*t*), *j* = 2, 3, see Figure 2.

In the model introduced originally by Jansen and Rit, the connectivity weights were assumed to be constant, i.e., equal to their mean value. Nevertheless, these coefficients are in fact variable and, as we show in the sequel, the variability of the connectivity impacts the solution at the level of the neural mass. Statistical properties of the connectivities have been studied in detail, for instance in Braitenberg and Schüz (1998).

We consider a network of *N* neurons, with *N*_α, α = 1, 2, 3 of them belonging to population α. We index by 1 (respectively 2 and 3) the pyramidal (respectively excitatory feedback, inhibitory inter-neuron) populations. We choose in each population a particular neuron indexed by *i*_α, α = 1, 2, 3. The evolution equations of the network can be written, for instance, in terms of the potentials *V*_{i1}, *V*_{i2} and *V*_{i3} labeled in Figure 2, and these equations read:

In the mean-field limit, denoting by *V*_α, α = 1, 2, 3 the average membrane potential of each class, we obtain the following equations:

where the effective interaction process associated with this problem is a Gaussian process of mean:

All other mean values correspond to the non-interacting populations and are equal to 0. The covariance matrix can be deduced from Eq. 25:

where

This model is a voltage-based model in the sense of Ermentrout (1998). Let us now instantiate the synaptic dynamics and compare the mean-field equations with Jansen's population equations^{2} (sometimes also improperly called mean-field equations). The simplest model of synaptic integration is a first-order integration, which yields exponentially decaying postsynaptic potentials:

Note that this is exactly Eq. 10. The corresponding g-shape satisfies the following first-order differential equation:

In this equation τ is the time constant of the synaptic integration and *K* the synaptic efficiency. The coefficients *K* and τ are the same for the pyramidal and the excitatory feedback populations (they are characteristic of the pyramidal neurons and define the g-shape *g*_1), and different for the inhibitory population (defining the g-shape *g*_3). In the pyramidal or excitatory (respectively the inhibitory) case we have *K* = *K*_1, τ = τ_1 (respectively *K* = *K*_3, τ = τ_3). Finally, the sigmoid function *S* is given by:

where ν_max is the maximum firing rate and *v*_0 is a voltage reference. With this synaptic dynamics we obtain the first-order Jansen and Rit equations:

The “original” Jansen and Rit equations (Grimbert and Faugeras, 2006; Jansen and Rit, 1995) amount to considering only the mean of the process and assuming that, for *i*, *j* ∈ {1, 2, 3}, the expectation commutes with the sigmoidal function *S*. This is a very strong assumption: the fluctuations of the solutions of the mean-field equation around the mean imply that the sigmoid cannot be treated as linear in the general case.

A higher order model was introduced by van Rotterdam et al. (1982)
to better account for the synaptic integration and to better reproduce the characteristics of real postsynaptic potentials. In this model the g-shapes satisfy a second-order differential equation:

We recognize the g-shape defined by Eq. 11, the solution of this second-order differential equation. With this type of synaptic integration we obtain the following mean-field equations:

Here again, going from the mean-field Eq. 37 to the original Jansen and Rit neural mass model consists in studying the equation satisfied by the mean of the process given by Eq. 37 and commuting the sigmoidal function with the expectation.

Note that the introduction of higher-order synaptic integration results in richer behaviors. For instance, Grimbert and Faugeras (2006) showed that some bifurcations can appear in the second-order JR model, giving rise to epileptic-like oscillations and alpha activity that do not appear in the first-order model.
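The difference between the two synaptic models can be made concrete. The sketch below is an illustration only: the parameter values are assumptions, and the two impulse responses are taken in their usual forms, g₁(t) = K e^{−t/τ} for the first-order model of Eq. 10 and the alpha function g₂(t) = K t e^{−t/τ} for the second-order model of Eq. 11. The first jumps instantaneously and then decays, while the second rises smoothly and peaks at t = τ, which is why it reproduces real postsynaptic potentials better.

```python
import numpy as np

K, tau = 3.25, 0.01                   # assumed synaptic efficiency and time constant
t = np.linspace(0.0, 0.1, 2001)

g1 = K * np.exp(-t / tau)             # first-order PSP: maximal at t = 0, then decays
g2 = K * t * np.exp(-t / tau)         # second-order (van Rotterdam) PSP: rises, then decays

t_peak2 = t[np.argmax(g2)]            # the alpha function peaks at t = tau
```

The peak value of g₂ is K τ e⁻¹, so the second-order response is both delayed and attenuated relative to the first-order one.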

The mean-field equation (Eq. 31) is an implicit equation on the stochastic process (*V*(*t*))_{*t* ≥ *t*_0}. We prove in this section that under some mild assumptions this implicit equation has a unique solution. These assumptions are the following.

**Assumption 1.**

(a) The matrix **L**(*t*) is *C*^0 and satisfies ‖**L**(*t*)‖ ≤ *k*_L for all *t* in [*t*_0, *T*], for some matrix norm ‖ ‖ and some strictly positive constant *k*_L.

(b) The matrix **F**(*t*) has all its singular values lowerbounded (respectively upperbounded) by a strictly positive constant^{3} for all *t* in [*t*_0, *T*].

(c) The deterministic external input vector **I**(*t*) is bounded and we have ‖**I**(*t*)‖_∞ ≤ *I*_max for all *t* in [*t*_0, *T*] and some strictly positive constant *I*_max.

This solution is the fixed point, in a set of *kP*-dimensional processes, of an equation that we will define from the mean-field equations. We will construct a sequence of Gaussian processes and prove that it converges in distribution toward this fixed point. We first recall some results on the convergence of random variables and stochastic processes.

### Convergence of Gaussian Processes

We recall the following result from Bogachev (1998), which formalizes the intuition that a sequence of Gaussian processes converges toward a Gaussian process if and only if the means and covariance functions converge. In fact, for this to be true it is only necessary to add one more condition, namely that the corresponding sequence of measures does not have “any mass at infinity”. This property is called uniform tightness (Billingsley, 1999). More precisely we have

**Definition 3.** (Uniform tightness). Let (*X*_n)_{n ≥ 1} be a sequence of *kP*-dimensional processes defined on [*t*_0, *T*] and let *P*_n be the associated probability measures. The sequence is called uniformly tight if and only if for all ε > 0 there exists a compact set *K* such that *P*_n(*K*) > 1 − ε for all *n* ≥ 1.

**Theorem 1.** *Let* (*X*_n)_{n ≥ 1} *be a sequence of kP-dimensional Gaussian processes defined on* [*t*_0, *T*] *or on an unbounded interval*^{4}. *The sequence converges to a Gaussian process X if and only if the following three conditions are satisfied*:

• *The sequence is uniformly tight*.

• *The sequence* μ^n(*t*) *of the mean functions converges for the uniform norm*.

• *The sequence C*^n *of the covariance operators converges for the uniform norm*.

We now, as advertised, define such a sequence of Gaussian processes.

Let us fix *Z*_0, a *kP*-dimensional Gaussian random variable, independent of the Brownian motion and of the process (*X*_t)_{*t* ∈ [*t*_0, *T*]}.

**Definition 4.** Let *X* be an element of the set of *kP*-dimensional processes and let *F*_k be the function such that

where **m**^X and Δ^X are defined^{5} in Section “Mean-Field Equations for Multi-Populations Neural Network Models”.

Note that, by Definition 2, the random process (*F*_k(*X*))_{*t* ∈ [*t*_0, *T*]}, *k* ≥ 1, is the sum of a deterministic function (defined by the external current) and three independent random processes defined by *Z*_0, the interaction between neurons, and the external noise. These three processes being Gaussian, so is (*F*_k(*X*))_{*t* ∈ [*t*_0, *T*]}. Also note that *F*_k(*X*)(*t*_0) = *Z*_0. It should be clear that a solution of the mean-field equation (Eq. 31) satisfies *V*(*t*_0) = *Z*_0 and is a fixed point of *F*_k, i.e., *F*_k(*V*)(*t*) = *V*(*t*). Let

*X* be a given stochastic process such that *X*(*t*_0) = *Z*_0 (hence *X* is independent of the Brownian motion). We define the sequence of Gaussian processes (*X*_n)_{n ≥ 0} by:

In the remainder of this section we show that the sequence of processes (*X*_n) converges in distribution toward the unique fixed point *Y* of *F*_k, which is also the unique solution of the mean-field equation (Eq. 31).

### Existence and uniqueness of a solution for the mean-field equations

The following upper and lower bounds are used in the sequel.

**Lemma 1.** Consider the Gaussian process **U**^X · **1**, where **U**^X is defined in Sections “The Mean-Field Equations” and “Introduction” and **1** is the *P*-dimensional vector with all coordinates equal to 1. Its mean is bounded for all *t*_0 ≤ *t* ≤ *T*, and the maximum eigenvalue of its covariance matrix is upperbounded by a quantity proportional to ‖*S*_β‖²_∞, where ‖*S*_β‖_∞ is the supremum of the absolute value of *S*_β. We note μ and σ_max the resulting bounds on the mean and covariance.

*Proof.* The proof is straightforward from Definition 4.

The proof of existence and uniqueness of a solution, and of the convergence of the sequence (Eq. 38), is in two main steps. We first prove that the sequence of Gaussian processes (*X*_n)_{n ≥ 1} is uniformly tight by proving that it satisfies Kolmogorov's criterion for tightness. This takes care of condition 1 in Theorem 1. We then prove that the sequences of the mean functions and covariance operators are Cauchy sequences for the uniform norms, taking care of conditions 2 and 3.

#### Uniform tightness

We first recall the following theorem due to Kolmogorov (Kushner, 1984, Chapter 4.1).

**Theorem 2.** (Kolmogorov's criterion for tightness). *Let* (*X*_n)_{n ≥ 1} *be a sequence of kP-dimensional processes defined on* [*t*_0, *T*]. *If there exist* α, β, *C* > 0 *such that*

$$ E\big[\, \|X_n(t) - X_n(s)\|^{\beta} \,\big] \le C\, |t - s|^{1+\alpha}, \qquad s, t \in [t_0, T],\ n \ge 1, $$

*then the sequence is uniformly tight*.
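For the Brownian part of the iterates this criterion holds with β = 4 and α = 1: E[(W_t − W_s)²] = t − s gives only a first power of |t − s|, whereas E[(W_t − W_s)⁴] = 3(t − s)². A quick Monte Carlo check of these two moments (an illustration, not part of the original argument; the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
dt = 0.01
# Brownian increments W_t - W_s over an interval of length |t - s| = dt
incr = rng.standard_normal(4_000_000) * np.sqrt(dt)

m2 = np.mean(incr ** 2)   # ≈ dt:      power |t - s|^1, not enough for the criterion
m4 = np.mean(incr ** 4)   # ≈ 3 dt^2:  power |t - s|^2, satisfies it with beta = 4, alpha = 1
```

This is exactly the heuristic dW ∼ (dt)^{1/2} invoked below: squaring the increment only yields |t − s|, so the fourth power is the smallest even moment giving an exponent strictly larger than 1.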

Using this theorem we prove that the sequence (*X*_n)_{n ≥ 1} satisfies Kolmogorov's criterion for β = 4 and α ≥ 1. The reason for choosing β = 4 is that, heuristically, *dW* scales as (*dt*)^{1/2}. Therefore, in order to upperbound E[‖*X*_n(*t*) − *X*_n(*s*)‖^β] by a power of |*t* − *s*| at least equal to 2 (hence strictly larger than 1), we need to raise ‖*X*_n(*t*) − *X*_n(*s*)‖ to a power at least equal to 4. The proof itself is technical and uses standard inequalities (Cauchy–Schwarz's and Jensen's), properties of Gaussian integrals, elementary properties of the stochastic integral, and Lemma 1. It also uses the fact that the input current is bounded, i.e., ‖**I**(*t*)‖_∞ ≤ *I*_max; this is Assumption (c) of Assumption 1.

**Theorem 3.** *The sequence of processes* (*X*_n)_{n ≥ 1} *is uniformly tight*.

*Proof.* We do the proof for *k* = 1; the case *k* > 1 is similar. If we assume that *n* ≥ 1 and *s* < *t*, we can rewrite the difference *X*_n(*t*) − *X*_n(*s*), using property (i) of Proposition B.1 in Appendix B, as a sum of seven terms; therefore (Cauchy–Schwarz inequality):

Because ‖Φ_L(*t*, *t*_0) − Φ_L(*s*, *t*_0)‖ ≤ |*t* − *s*| ‖**L**‖, we see that all the terms in the righthand side of the inequality but the second one, involving the Brownian motion, are of the order of (*t* − *s*)². We raise both sides to the second power again, use the Cauchy–Schwarz inequality, and take the expected value:

Remember that the effective interaction term is a *P*-dimensional diagonal Gaussian process, noted **Y**_u in the sequel; therefore:

The second-order moments are upperbounded by some regular function of μ and σ_max (defined in Lemma 1) and, because of the properties of Gaussian integrals, so are the fourth-order moments.

We now define **B**(*u*) = Φ_L(*s*, *u*)**F**(*u*) and evaluate the fourth moment of the corresponding stochastic integral. Because the coordinates of the Brownian motion are by construction independent for *i* ≠ *j* (a property of the Itô integral), the last term is the sum of only three types of terms:

1. If *j*_1 = *k*_1 = *j*_2 = *k*_2 we define the terms *T*_1 and, using Cauchy–Schwarz:

2. If *j*_1 = *k*_1 and *j*_2 = *k*_2 but *j*_1 ≠ *j*_2 we define the terms *T*_2, which are equal, because of the independence of the corresponding stochastic integrals, to:

3. Finally, if *j*_1 = *j*_2 and *k*_1 = *k*_2 but *j*_1 ≠ *k*_1 we define the terms *T*_3, which are equal, because of the independence of the corresponding stochastic integrals, to:

Because of the properties of the stochastic integral and of Gaussian integrals we have, for some positive constant *k*:

This takes care of the terms of the form *T*_1. Next we have:

which takes care of the terms of the form *T*_2. Finally we have, because of the properties of the Itô integral:

which takes care of the terms of the form *T*_3.

This shows that the last term in Eq. 40 is of the order of (*t* − *s*)^{1+a} with a ≥ 1. Therefore we have, for all *s*, *t* in [*t*_0, *T*]:

where *C* is a constant independent of *t* and *s*. According to Kolmogorov's criterion for tightness, the sequence of processes (*X*_n)_{n ≥ 1} is uniformly tight. The proof for *k* > 1 is similar.

#### The mean and covariance sequences are Cauchy sequences

Let us note μ^n(*t*) [respectively *C*^n(*t*, *s*)] the mean [respectively the covariance matrix] function of *X*_n = *F*_k(*X*_{n−1}), *n* ≥ 1. We have:

where **m** is given by Eq. 26. Similarly we have:

Note that the *kP* × *kP* covariance matrix has only one nonzero *P* × *P* block. According to Definition 2 we have:

where Δ is given by Eq. 27 and *Dx* is defined in Proposition 1.

In order to prove our main result, that the two sequences of functions (μ^n) and (*C*^n) are uniformly convergent, we require the following four lemmas, which we state without proofs; the proofs can be found in Appendixes E–H. The first of them give uniform (i.e., independent of *n* ≥ 2 and of α = 1, …, *kP*) bounds, including a strictly positive lowerbound on the variances *C*^n_αα(*t*, *t*). In what follows we use the following notation: for *C* a symmetric positive definite matrix, we note λ_min(*C*) its smallest eigenvalue.

**Lemma 2.** The following upperbounds are valid for all *n* ≥ 1 and all *s*, *t* ∈ [*t*_0, *T*]:

where μ and σ_max are defined in Lemma 1 and the remaining constant in Assumption 1.

**Lemma 3.** For all *t* ∈ [*t*_0, *T*], all α = 1, …, *kP*, and all *n* ≥ 1, we have:

where λ_min is the smallest singular value of the positive symmetric definite matrix Φ_L(*t*, *t*_0)Φ_L(*t*, *t*_0)^T for *t* ∈ [*t*_0, *T*], and λ_min(*C*(*t*_0)) is the smallest eigenvalue of the positive symmetric definite covariance matrix of the initial condition.

The next lemma gives a uniform lowerbound for the expression that appears in the definition of *C*^{n+1} through Eqs. 43 and 27. The crucial point is that this lowerbound is *O*(|*t* − *s*|), which is central in the proof of Lemma 5.

**Lemma 4.** For all α = 1, …, *kP* and all *n* ≥ 1, the quantity in question is lowerbounded by the positive symmetric function:

where the strictly positive lowerbound on the singular values of the matrix **F**(*u*), *u* ∈ [*t*_0, *T*], introduced in Assumption 1, enters the definition of θ.

The last lemma shows that an integral that appears in the proof of the uniform convergence of the sequences of functions (μ^n) and (*C*^n) is upperbounded by the *n*th term of a convergent series.

**Lemma 5.** The 2*n*-dimensional integral

where the functions ρ_i(*u*_i, *v*_i), *i* = 1, …, *n*, are either equal to 1 or to θ(*u*_i, *v*_i)^{−1/2} (the function θ is defined in Lemma 4), is upperbounded by *k*^n/(*n* − 1)! for some positive constant *k*.

With these lemmas in hand we prove Proposition 3. The proof is technical but its idea is very simple. We find upperbounds for the matrix infinite norm of

*C*^{n+1}(*t*, *s*) − *C*^n(*t*, *s*) and the infinite norm of μ^{n+1}(*t*) − μ^n(*t*) by applying the mean value theorem and Lemmas 3 and 4 to these norms. These upperbounds involve integrals of the infinite norms of *C*^n(*t*, *s*) − *C*^{n−1}(*t*, *s*) and μ^n(*t*) − μ^{n−1}(*t*) and, through Lemma 4, one over the square root of the function θ. Proceeding recursively and using Lemma 5, one easily shows that the infinite norms of *C*^{n+1} − *C*^n and μ^{n+1} − μ^n are upperbounded by the *n*th term of a convergent series, from which it follows that the two sequences of functions are Cauchy sequences, hence convergent.

**Proposition 3.** The sequences of covariance matrix functions *C*^n(*t*, *s*) and of mean functions μ^n(*t*), *s*, *t* in [*t*_0, *T*], are Cauchy sequences for the uniform norms.
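The convergence rate behind this proposition can be seen numerically: a bound of the form kⁿ/(n − 1)! decays super-geometrically whatever the constant k, and the corresponding series sums to k·e^k. A small check (an illustration, not part of the proof; k = 5 is an arbitrary value):

```python
import math

k = 5.0  # arbitrary positive constant playing the role of k in the bound
terms = [k ** n / math.factorial(n - 1) for n in range(1, 60)]

partial_sum = sum(terms)          # approaches k * e^k as more terms are added
ratio = terms[30] / terms[29]     # equals k / 30: the terms eventually shrink fast
```

The ratio of consecutive terms is k/n, so the terms grow only while n < k and then collapse factorially; this is why the successive iterates μⁿ, Cⁿ get uniformly close very quickly.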

*Proof.* We have:

We take the infinite matrix norm of both sides of this equality and use the upperbounds on ‖Φ_L‖_∞ (see Appendix B) to obtain^{6}:

According to Eq. 27 we are led to consider the difference *A*_n − *A*_{n−1}, where:

We write next:

The mean value theorem yields:

Using the boundedness of the functions *S*_β and of their derivatives, we obtain:

where the constant *k*_L and the bound on the resolvent are defined in Appendix B, and:

A similar process applied to the mean values yields:

where μ is defined in Lemma 1. We now use the mean value theorem and Lemmas 3 and 4 to find upperbounds for the remaining differences. We have:

where *k*_0 is defined in Lemma 3. Hence:

Along the same lines we can show easily that:

and that:

where θ(*u*, *v*) is defined in Lemma 4. Grouping terms together and using the fact that all integrated functions are positive, we write:

Note that, because of Lemma 3, all integrals are well-defined. Regarding the mean functions, we write:

Proceeding recursively until we reach *C*^0 and μ^0, we obtain an upperbound for ‖*C*^{n+1}(*t*, *s*) − *C*^n(*t*, *s*)‖_∞ [respectively for ‖μ^{n+1}(*t*) − μ^n(*t*)‖_∞] which is the sum of fewer than 5^n terms, each one being the product of *k* raised to a power ≤ *n*, times 2μ_max or 2Σ_max (upperbounds for the norms of the mean vector and the covariance matrix defined in Lemma 2), times a 2*n*-dimensional integral *I*_n given by:

where the functions ρ_i(*u*_i, *v*_i), *i* = 1, …, *n*, are either equal to 1 or to θ(*u*_i, *v*_i)^{−1/2}. According to Lemma 5, this integral is of the order of some positive constant raised to the power *n*, divided by (*n* − 1)!. Hence the sum is less than some positive constant *k* raised to the power *n*, divided by (*n* − 1)!. By taking the supremum with respect to *t* and *s* in [*t*_0, *T*] we obtain the same result for ‖*C*^{n+1} − *C*^n‖_∞ [respectively ‖μ^{n+1} − μ^n‖_∞]. Since the series Σ_n *k*^n/(*n* − 1)! is convergent, ‖*C*^{n+p} − *C*^n‖_∞ [respectively ‖μ^{n+p} − μ^n‖_∞] can be made arbitrarily small for large *n* and *p*, and the sequence *C*^n [respectively μ^n] is a Cauchy sequence.

#### Existence and uniqueness of a solution of the mean-field equations

It is now easy to prove our main result: the mean-field equations (Eq. 31), or equivalently (Eq. 28), are well-posed, i.e., have a unique solution.

**Theorem 4.** *For any nondegenerate kP-dimensional Gaussian random variable Z*_0, *independent of the Brownian motion, and any initial process X such that X*(*t*_0) = *Z*_0, *the map F*_k *has a unique fixed point, toward which the sequence* (*X*_n)_{n ≥ 1} *of Gaussian processes converges in law*.

*Proof.* Since the spaces of mean functions and of covariance functions are Banach spaces for the uniform norm, the Cauchy sequence μ^n [respectively *C*^n] of Proposition 3 converges to an element μ [respectively *C*] of the corresponding space. Therefore, according to Theorem 1, the sequence (*X*_n) of Gaussian processes converges in law toward the Gaussian process *Y* with mean function μ and covariance function *C*. This process is clearly a fixed point of *F*_k.

Hence we know that there exists at least one fixed point for the map *F*_k. Assume there exist two distinct fixed points *Y*_1 and *Y*_2 of *F*_k, with mean functions μ_i and covariance functions *C*_i, *i* = 1, 2, and with the same initial condition. Since *F*_k^n(*Y*_i) = *Y*_i for all *n* ≥ 1, the proof of Proposition 3 shows that ‖μ_1 − μ_2‖_∞ [respectively ‖*C*_1 − *C*_2‖_∞] is upperbounded by the product of a positive number *a*_n [respectively *b*_n] with ‖μ_1 − μ_2‖_∞ [respectively ‖*C*_1 − *C*_2‖_∞]. Since lim_{n→∞} *a*_n = lim_{n→∞} *b*_n = 0, this shows that μ_1 = μ_2 and *C*_1 = *C*_2; hence the two Gaussian processes *Y*_1 and *Y*_2 are indistinguishable.

We have proved that for any nondegenerate Gaussian initial condition *Z*_0 there exists a unique solution of the mean-field equations. The proof of Theorem 4 is constructive, and hence provides a way of computing the solution of the mean-field equations by iterating the map *F*_k, starting from any initial process *X* satisfying *X*(*t*_0) = *Z*_0, for instance a Gaussian process such as an Ornstein–Uhlenbeck process. We build upon these facts in Section “Numerical Experiments”. Note that existence and uniqueness hold whatever the initial time *t*_0 and the final time *T*.

So far we have investigated the existence and uniqueness of solutions of the mean-field equation for a given initial condition. We are now interested in investigating stationary solutions, which allow for some simplifications of the formalism.
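Before turning to the stationary problem, the constructive procedure can be sketched concretely. The code below is an illustration only, not the authors' implementation: it treats a single population of the Simple Model, with an assumed logistic sigmoid and invented parameter values. It discretizes time, represents each Gaussian iterate by its mean function and covariance matrix on the grid, and repeatedly applies the map: compute the effective-interaction statistics by Gauss–Hermite quadrature (in the spirit of Eqs. 26 and 27), then update the mean and covariance through the integral representation (Eq. 34).

```python
import numpy as np

# One-population Simple Model on a time grid; all parameter values are assumptions.
T, n = 2.0, 41
t = np.linspace(0.0, T, n); dt = t[1] - t[0]
tau, Jbar, sigJ, s_ext, I = 1.0, 0.5, 0.3, 0.2, 0.1
mu0, c0 = 0.0, 0.01                        # Gaussian initial condition Z_0
S = lambda v: 1.0 / (1.0 + np.exp(-v))     # assumed sigmoid

# Gauss-Hermite nodes/weights for expectations under N(0, 1)
x, w = np.polynomial.hermite_e.hermegauss(11)
w = w / np.sqrt(2.0 * np.pi)

def m_of(mu, cdiag):
    """m(t) = E[S(X(t))] for Gaussian marginals (cf. Eq. 26)."""
    sd = np.sqrt(np.maximum(cdiag, 0.0))
    return np.array([w @ S(m + s * x) for m, s in zip(mu, sd)])

def Delta_of(mu, C):
    """Delta(u, v) = E[S(X(u)) S(X(v))] for bivariate Gaussian marginals (cf. Eq. 27)."""
    D = np.empty((n, n))
    for i in range(n):
        ci = max(C[i, i], 1e-12)
        a = S(mu[i] + np.sqrt(ci) * x)
        for j in range(n):
            rem = max(C[j, j] - C[i, j] ** 2 / ci, 0.0)
            b = S(mu[j] + (C[i, j] / np.sqrt(ci)) * x[:, None] + np.sqrt(rem) * x[None, :])
            D[i, j] = (w * a) @ b @ w
    return D

# Resolvent e^{-(t-s)/tau} restricted to s <= t
K = np.exp(-np.subtract.outer(t, t) / tau) * np.tril(np.ones((n, n)))

mu, C = np.full(n, mu0), np.full((n, n), c0)   # initial process X
for _ in range(60):
    m, D = m_of(mu, np.diag(C)), Delta_of(mu, C)
    mu_new = mu0 * np.exp(-t / tau) + K @ (Jbar * m + I) * dt
    C_new = (c0 * np.exp(-np.add.outer(t, t) / tau)        # initial-condition term
             + sigJ ** 2 * K @ D @ K.T * dt * dt           # random-synaptic-weights term
             + s_ext ** 2 * K @ K.T * dt)                  # Brownian-noise term
    diff = max(np.abs(mu_new - mu).max(), np.abs(C_new - C).max())
    mu, C = mu_new, C_new
    if diff < 1e-9:
        break
```

Each pass of the loop is one application of the map, and under the contraction estimates of Proposition 3 the iterates converge very fast on a finite interval, so a few dozen passes suffice.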

A stationary solution is a solution whose probability distribution does not change under the flow of the equation. These solutions have already been investigated by several authors (see Brunel and Hakim, 1999; Sompolinsky et al., 1998). We propose a new framework to study and simulate these processes. Indeed, we show in this section that under a certain contraction condition there exists a unique solution to the stationary problem. As in the previous section, our proof is constructive and provides a way to simulate the solutions.

**Remark.** The long-time mean-field description of a network is still a great endeavor in mathematics and statistical physics. In this section we formally take the mean-field equation we obtained and let *t*_0 → −∞, obtaining the equation which is the limit of the mean-field equation when *t*_0 → −∞. This means that we consider first the limit *N* → ∞ and then *t*_0 → −∞. These two limits do not necessarily commute, and there are known examples, for instance in spin glasses, where they do not.

It is clear that in order to get stationary solutions the stochastic system has to be autonomous. More precisely, we modify Assumption 1 as follows.

**Assumption 2.**

(a) The matrices **L**(*t*) and **F**(*t*) and the input currents **I**(*t*) do not depend upon *t*.

(b) The real parts of the eigenvalues of **L** are negative: Re(λ) < 0 for all eigenvalues λ of **L**.

(c) The matrix **F** has full rank.

Under Assumption (a) of Assumption 2, the resolvent Φ_L(*t*, *s*) is equal to *e*^{**L**(*t*−*s*)}. Under Assumption (b) we only consider first-order systems, since otherwise the matrix **L** has eigenvalues equal to 0. We now prove the following proposition.

**Proposition 4.** Under the previous assumptions we have:

1. ‖Φ_L(*t*, *u*)‖_∞ = ‖*e*^{**L**(*t*−*u*)}‖_∞ ≤ *k e*^{−λ_L(*t*−*u*)} for some positive constants *k* and λ_L and all *u* ≤ *t*;

2. the process **X**_0 is well-defined, Gaussian and stationary when *t*_0 → −∞.

*Proof.* The first point follows from the fact that Re(λ) < −λ_L for all eigenvalues λ of **L**. This assumption also implies that there exists a norm on ℝ^P for which the semigroup *e*^{**L**t} is contracting, and hence the exponential bound holds for some positive constant *k*. This implies the first point.

We now address the second point. The stochastic integral is well-defined for all *t* ≤ *T* and is Gaussian with 0-mean. Its covariance matrix reads:

Let us assume for instance that *t*′ < *t* and perform the change of variable *u* = *t* − *s* to obtain:

Under the previous assumptions this matrix integral is defined when *t*_0 → −∞ (dominated convergence theorem), and in the limit we obtain a well-defined function of *t*′ − *t*.

The second point of Proposition 4 guarantees the existence of the process **X**_0 as the limit of these processes when *t*_0 → −∞. This process is a stationary solution of the equation:

It is Gaussian, with a mean and a covariance matrix Σ^0, defined by Eq. 50, that are independent of *t*.

We call *long term mean-field equation* (LTMFE) the implicit equation:

where **X**_0 is the stationary process defined by Eq. 51 and where **U**^V(*t*) is the effective interaction process introduced previously. We next define the long term function.

**Proposition 5.** The long term function is well-defined on the set of processes under consideration.

*Proof*. We have already seen that the process **X**_0 is well-defined. The deterministic term is also well-defined because of the assumptions on **L**.

Let *X* be a given process. To prove the proposition we just have to ensure that the associated Gaussian process is well-defined. This results from the contraction assumption on **L** and the fact that the functions *S*_β are bounded. We decompose this process into a “long memory” term and the interaction term from time *t* = 0. The latter term is clearly well-defined; we show that the memory term is also well-defined as a Gaussian random variable.

Writing out the memory term and considering its second factor, this random variable is Gaussian and its mean is given by an integral that is well-defined because of Eq. 49 and the fact that the functions *S*_β are bounded. A similar reasoning shows that the corresponding covariance matrix is well-defined. Hence the memory term is a well-defined Gaussian random variable and, for any process *X*, the image of *X* under the long term function is well-defined.

We can now prove the following proposition.

**Proposition 6.** The mean vectors and the covariance matrices of the processes in the image of the long term function are bounded.

*Proof.* Indeed, since the functions *S*_β are bounded, we have for the means:

In a similar fashion the covariance matrices of the processes in the image of the long term function are bounded. Indeed we have:

resulting in the bounds noted μ_LT and Σ_LT in the sequel.
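The stationary process **X**_0 that underlies these bounds is computable explicitly. From the proof of Proposition 4, its equal-time covariance Σ⁰ (Eq. 50) takes the form ∫₀^∞ e^{**L**u} **FF**ᵀ e^{**L**ᵀu} du, which, when all eigenvalues of **L** have negative real parts, is the unique solution of the Lyapunov equation **L**Σ + Σ**L**ᵀ = −**FF**ᵀ. The sketch below is an illustration (the matrices are invented examples satisfying Assumption 2): it approximates the integral numerically and checks the Lyapunov identity.

```python
import numpy as np

# Invented example matrices: L has eigenvalues with negative real parts (Assumption 2b)
# and F has full rank (Assumption 2c).
L = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
F = np.array([[0.3, 0.0],
              [0.1, 0.2]])
Q = F @ F.T

# e^{Lu} via the eigendecomposition of L (L is diagonalizable here)
lam, V = np.linalg.eig(L)
Vinv = np.linalg.inv(V)
expL = lambda u: ((V * np.exp(lam * u)) @ Vinv).real

# Sigma0 = ∫_0^∞ e^{Lu} F F^T e^{L^T u} du, truncated at u = 40 (integrand ~ e^{-2u})
us = np.linspace(0.0, 40.0, 8001)
vals = np.array([expL(u) @ Q @ expL(u).T for u in us])
Sigma0 = ((vals[:-1] + vals[1:]) / 2.0).sum(axis=0) * (us[1] - us[0])  # trapezoidal rule

# Stationarity check: Sigma0 solves the Lyapunov equation L S + S L^T = -Q
residual = L @ Sigma0 + Sigma0 @ L.T + Q
```

In practice one would solve the Lyapunov equation directly rather than integrate, but the integral form makes the role of the exponential decay in Assumption 2 visible: without Re(λ) < 0 the integral diverges and no stationary covariance exists.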

**Lemma 6.** The set of stationary processes is invariant under the long term function.

*Proof.* Since the processes in its image are Gaussian processes, one just needs to check that the mean of the image process is constant in time and that its covariance matrix *C*(*s*, *t*) only depends on *t* − *s*.

Let *Z* be a stationary process and *Y* its image under the long term function. We denote as usual the mean of the process *Z*_α(*t*) and its covariance function. The mean of the process **U**^Z · **1** reads:

and hence does not depend on time. Similarly, its covariance function reads:

which is clearly a function of *t* − *s*. Hence **U**^Z · **1** is stationary, and we denote its covariance function accordingly.

It follows that the mean of *Y*_t reads:

since we proved that the mean of **U**^Z · **1** does not depend on *s*. Similarly, we compute the covariance function and check that it can be written as a function of *t* − *s*. Indeed, it reads:

since the process **X**_0 is stationary. *C*^Y(*t*, *s*) is clearly a function of *t* − *s*. Hence *Y* is a stationary process, and the lemma is proved.

*The sequence of processes _{} is uniformly tight.*

*Proof.* The proof is essentially the same as that of Theorem 3, since _{}(*X*)_{t} appears as the sum of the random variable _{}(*X*)_{0} and the Gaussian process defined by _{}, which is equal to *F*_{k}(*X*)_{t} defined in Section “Existence and Uniqueness of Solutions in Finite Time” for *t*_{0} = 0. Therefore _{} for *t* > 0. We have proved the uniform tightness of the sequence of processes

_{} in Theorem 3. Hence, according to Kolmogorov’s criterion for tightness, we just have to prove that the sequence of Gaussian random variables:

is uniformly tight. Since it is a sequence of Gaussian random variables, it is sufficient to prove that their means are bounded and their covariance matrices bounded above to obtain that for any ε > 0 there exists a compact

*K*_{ε} such that for any *n* ∈ ℕ, we have _{}. This is a consequence of Proposition 6 for the first random variable and of the definition of **X**_{0} for the second. By Kolmogorov’s criterion the sequence of processes _{} is uniformly tight.

In order to apply Theorem 1 we need to prove that the sequences of covariance and mean functions are convergent. Unlike the case of

*t*_{0} finite, this is not always true. Indeed, to ensure existence and uniqueness of solutions in the stationary case, the parameters of the system have to satisfy a contraction condition, and Proposition 3 extends as follows.

**Proposition 7.** *If λ_{L} defined in Eq. 48 satisfies the conditions (Eq. 53) defined in the proof, depending upon k_{C} (defined in Eq. 45), k_{0}, μ_{LT} and Σ_{LT} (defined in Proposition 6), then the sequences of covariance matrix functions C^{n}(t, s) and of mean functions μ^{n}(t), s, t in [t_{0}, T], are Cauchy sequences for the uniform norms.*

*Proof.* The proof follows that of Proposition 3 with a few modifications that we indicate. In establishing the equation corresponding to Eq. 44 we use the fact that ‖Φ_{L}(*t*, *u*)‖_{∞} ≤ *ke*^{−λ_{L}(t−u)} for some positive constant *k* and all *u*, *t*, *u* ≤ *t*. We therefore have:

The rest of the proof proceeds in the same way as in Proposition 3. Equations 46 and 47 become:

and

for some positive constant *K*, function of *k*, *k*_{C} (defined in Eq. 45), and *k*_{0}. Proceeding recursively until we reach *C*^{0} and μ^{0}, we obtain an upper bound for ‖*C*^{n+1}(*t*, *s*) − *C*^{n}(*t*, *s*)‖_{∞} (respectively for ‖μ^{n+1}(*t*) − μ^{n}(*t*)‖_{∞}) which is the sum of fewer than 5^{n} terms, each one being the product of *K*, times 2μ_{LT} or 2Σ_{LT}, times a 2*n*-dimensional integral *I*_{n} given by:

where the functions ρ_{i}(*u*_{i}, *v*_{i}), *i* = 1,…,*n* are either equal to 1 or to _{}. It can be shown by straightforward calculation that each sub-integral contributes at most either

in the other case. Hence we obtain factors of the type

where 0 ≤ *p* ≤ *n*. If λ_{L} < 1, (λ_{L})^{(3n + P)/2} ≥ _{}, and otherwise _{}_{}. We thus obtain the two conditions ^{}

Putting all these results together we obtain the following theorem of existence and uniqueness of solutions for the LTMFE:

**Theorem 6.** *Under the contraction conditions (Eq. 53), the function _{} has a unique solution in _{}, which is stationary, and for any process X the sequence _{} of Gaussian processes converges in law toward the unique fixed point of the function _{}.*

*Proof.* The proof is essentially similar to that of Theorem 4. Indeed, the mean and the covariance matrices converge, since they are Cauchy sequences in the complete space of continuous functions equipped with the uniform norm. Using Theorem 1, we obtain that the sequence converges to a process *Y* which is necessarily a fixed point of _{}. Hence we have existence of a fixed point for _{}. The uniqueness comes from the results obtained in the proof of Proposition 7. The limiting process is necessarily stationary. Indeed, let *X* be a stationary process. Then for any *n* ∈ ℕ, the process _{} will be stationary by virtue of Lemma 6, and hence so will be the limiting process, which is the only fixed point of _{}.

Hence, in the stationary case, the existence and uniqueness of a solution is not always ensured. For instance, if the leaks are too small (i.e., when the time constants of the decay of the membrane potentials are too long), the sequence can diverge or have multiple fixed points.
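To illustrate this failure mode concretely, the toy scalar iteration below (a sketch, not the paper's equations: a one-population caricature with illustrative constants σ_J = g = 1) iterates a stationary-variance map v ↦ (τ/2) σ_J² E[tanh(g X)²] with X ~ N(0, v). For a strong leak (small τ) the trivial fixed point v = 0 attracts the iteration, while for a weak leak (large τ) it repels and the iterates settle on a second, nonzero fixed point:

```python
import numpy as np

def variance_map(v, tau, sigma_j, g, n=60):
    """One step of a toy stationary-variance map (illustrative only):
    v -> (tau / 2) * sigma_j**2 * E[tanh(g X)^2],  X ~ N(0, v),
    with the Gaussian expectation computed by Gauss-Hermite quadrature."""
    z, w = np.polynomial.hermite_e.hermegauss(n)  # nodes/weights for N(0, 1)
    s2 = (w @ np.tanh(g * np.sqrt(v) * z) ** 2) / np.sqrt(2 * np.pi)
    return 0.5 * tau * sigma_j**2 * s2

def iterate(v0, tau, iters=100):
    """Iterate the map from v0 and return the final value."""
    v = v0
    for _ in range(iters):
        v = variance_map(v, tau, sigma_j=1.0, g=1.0)
    return v

v_short = iterate(0.5, tau=0.25)  # strong leak: collapses to the trivial fixed point
v_long = iterate(0.5, tau=4.0)    # weak leak: escapes to a nonzero fixed point
```

The two calls at the bottom show the dichotomy: the same initial value v = 0.5 collapses to 0 for τ = 0.25, but stabilizes at a nonzero value for τ = 4.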

### Simulation Algorithm

Beyond the mathematical results, the framework that we introduced in the previous sections gives us a strategy to compute numerically the solutions of the dynamic mean-field equations. Indeed, we proved in Section “Existence and Uniqueness of Solutions in Finite Time” that under very moderate assumptions on the covariance matrix of the noise, the iterations of the map

_{} starting from any initial condition converge to the solution of the mean-field equations. This convergence result gives us a direct way to compute numerically the solution of the mean-field equations. Since we are dealing with Gaussian processes, determining the law of the iterates of the map

_{} amounts to computing its mean and covariance functions. In this section we describe our numerical algorithm in the case of the Simple Model of Section “Example I: The Simple Model”.

#### Computing _{}

Let *X* be a *P*-dimensional Gaussian process of mean μ^{X} = _{} and covariance _{}. We fix a time interval [*t*_{0} = 0, *T*] and denote by *Y* the image of the process *X* under _{1}. In the case of the simple model, the covariance of *Y* is diagonal. Hence in this case the expressions we obtain in Section “Existence and Uniqueness of Solutions in Finite Time” simply read:

where we denoted _{} the standard deviation of *X*_{α} at time *s*, instead of _{}. Thus, knowing _{}, *s* ∈ [0, *t*], we can compute _{} using a standard discretization scheme of the integral, with a time step small compared with τ_{α} and the characteristic time of variation of the input current *I*_{α}. Alternatively, we can use the fact that _{} satisfies the differential equation:

and compute the solution using a Runge–Kutta algorithm (which is faster and more accurate). Note that, when all the standard deviations of the process *X* are null for all times *t* ∈ [0, *T*], we obtain a standard dynamical system. Nevertheless, in the general case, _{} for some β’s, and the dynamical evolution of _{} depends on the Gaussian fluctuations of the field *X*. These fluctuations must be computed via the complete equation of the covariance diagonal coefficient _{}, which reads:

where:

Unless we assume the stationarity of the process (see e.g., Section “The Importance of the Covariance: Simple Model, One Population”), this equation cannot be written as an ordinary differential equation. We clearly observe here the non-Markovian nature of the problem: _{} depends on the whole past of the process up to time *t* ∧ *s*. This covariance can be split into the sum of two terms: the external noise contribution

_{} and the interaction between the neurons. The external noise contribution is a simple function and can be computed directly. To compute the interaction contribution to the standard deviation we have to compute the symmetric two-variable function:

from which one obtains the standard deviation using the formula

To compute the function _{} we start from *t* = 0 and *s* = 0, where _{}. We only compute _{} for *t* > *s* because of the symmetry. It is straightforward to see that:

with

Hence computing _{} knowing _{} amounts to computing _{}. Fix *t* ≥ 0. We have _{} and

This algorithm enables us to compute _{} for *t* > *s*. We deduce _{} for *t* < *s* using the symmetry of this function. Finally, to get the values of _{} for *t* = *s*, we use the symmetry property of this function and get:

These numerical schemes provide an efficient way of computing the mean and the covariance functions of the Gaussian process _{1}(*X*) (hence its probability distribution) knowing the law of the Gaussian process *X*. The algorithm used to compute the solution of the mean-field equations for the general models GM1 and GM*k* is a straightforward generalization.

#### Analysis of the algorithm

_{1}(*X*) (hence its probability distribution) knowing the law of the Gaussian process*X*. The algorithm used to compute the solution of the mean-field equations for the general models GM1 and GM*k*is a straightforward generalization.#### Analysis of the algorithm

**Convergence rate.** As proved in Theorem 4, given *Z*_{0} a nondegenerate *kP*-dimensional Gaussian random variable and *X* a Gaussian process such that *X*(0) = *Z*_{0}, the sequences of mean values and covariance functions computed theoretically converge uniformly toward those of the unique fixed point of the map _{}. It is clear that our algorithm converges uniformly toward the real function it emulates. Hence for a finite *N*, the algorithm will converge uniformly toward the mean and covariance matrix of the process _{}.

Denote by *X*_{f} the fixed point of _{k} in _{}, of mean _{} and covariance matrix _{}, and by _{} the numerical approximation of _{} computed using the algorithm previously described, whose mean is noted _{} and whose covariance matrix is noted _{}. The uniform error between the simulated mean after *N* iterations with a time step *dt* and the fixed point’s mean and covariance, which is the sum of the numerical error of the algorithm and the distance between the simulated process and the fixed point, is controlled by:

where *k*_{max} = max(*k*, ) and *k* and  are the constants that appear in the proof of Proposition 3 for the mean and covariance functions, and *R*_{N}(*x*) is the exponential remainder, i.e., _{}. Indeed, we have:

The discretization algorithm used converges in *O*(*dt*). Let us denote by *C*_{1} the convergence constant, which depends on the sharpness of the function we approximate and can be uniformly controlled over the iterations. Iterating the numerical algorithm has the effect of propagating the errors. Using these simple remarks we can bound the first term of the right-hand side of Eq. 55, i.e., the approximation error at the *N*th iteration:

Because the sequence of mean values is a Cauchy sequence, we can also bound the second term of the right-hand side of Eq. 55:

for some positive constant *k* introduced in the proof of Proposition 3. The sequence of remainders (*R*_{n}(*k*))_{n ≥ 0} converges fast toward 0 (an estimate of its convergence rate can be obtained using the fact that _{} by Stirling’s formula). Hence we have:

For the covariance, the principle of the approximation is exactly the same:

The second term of the right-hand side can be controlled using the same evaluation by _{}, where the constant is the one introduced in the proof of Proposition 3, and the first term is controlled by the rate of convergence of the approximation of the double integral, which is bounded by *C*_{2}(*N* + *T*)*dt*, where *C*_{2} depends on the parameters of the system and the discretization algorithm used. Hence we have:

The expressions (Eqs. 56 and 57) are the sum of two terms, one of which increases with *N* and *T* and decreases with *dt*, while the other decreases with *N*. If we want to obtain an estimate with an error bounded by some ε > 0, we can for instance fix *N* such that _{} and then fix the time step *dt* smaller than _{}.

**Complexity.** *The complexity of the algorithm depends on the complexity of the computations of the integrals. The algorithm described hence has complexity _{}.*
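The parameter-selection step above can be sketched in code. The constants `k`, `k_max` and `C1` below are placeholders for the ones appearing in the proof of Proposition 3 and in the discretization bound; we assume the two error terms take the schematic forms k_max·R_N(k) and C1·(N + T)·dt of Eqs. 55–57:

```python
import math

def remainder(k, N):
    """R_N(k) = sum_{n > N} k**n / n!, i.e., the tail of exp(k)."""
    partial = sum(k**n / math.factorial(n) for n in range(N + 1))
    return math.exp(k) - partial

def choose_parameters(eps, k=1.0, k_max=1.0, C1=1.0, T=1.0):
    """Pick N so that k_max * R_N(k) <= eps / 2, then dt so that the
    discretization term C1 * (N + T) * dt is also <= eps / 2.
    The constants are placeholders, not the paper's actual values."""
    N = 0
    while k_max * remainder(k, N) > eps / 2:
        N += 1  # the remainder decays super-exponentially, so this stops fast
    dt = eps / (2 * C1 * (N + T))
    return N, dt

N, dt = choose_parameters(1e-3)
```

Because R_N(k) decays like k^N/N! (Stirling), N grows only very slowly as ε shrinks; the cost of refining the accuracy is dominated by the smaller time step dt.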

### The Importance of the Covariance: Simple Model, One Population

As a first example and a benchmark for our numerical scheme we revisit the work of Sompolinsky et al. (1998). These authors studied the case of the simple model with one population (*P* = 1), with the centered sigmoidal function *S*(*x*) = tanh(*gx*), centered connectivity weights of standard deviation σ = 1 and no input (*I* = 0; Λ = 0). Note that there is therefore no “noise” in the system, which does not match the nondegeneracy conditions of Proposition 2 and of Theorem 4. This issue is discussed below. In this case, the mean equals 0 for all *t*. Nevertheless, the Gaussian process is nontrivial, as revealed by the study of the covariance *C*(*t*, *s*).

#### Stationary solutions

Assuming that the solution of the mean-field equation is a stationary solution with _{}, Sompolinsky and his collaborators found that the covariance obeys a second-order differential equation:

This form corresponds to the motion of a particle in a potential well, and it is easy to draw the phase portrait of the corresponding dynamical system. However, there is a difficulty. The potential *V*_{q} depends on a parameter *q* which is in fact precisely the covariance at τ = 0 (*q* = *C*(0)). In the stationary case, this covariance depends on the whole solution, and hence cannot really be considered as a parameter of the system. This is one of the main difficulties in this approach: mean-field equations in the stationary regime are self-consistent.

Nevertheless, the study of the shape of *V*_{q}, considering *q* as a free parameter, gives us some information. Indeed, *V*_{q} has the following Taylor expansion (*V*_{q} is even because *S* is odd):

where _{} and _{}, 〈Φ〉_{q} being the average value of Φ under the Gaussian distribution with mean 0 and variance *q* = *C*(0). If λ > 0, i.e., when

_{} then the dynamical system (Eq. 58) has a unique solution *C*(*t*) = 0. This corresponds to a stable fixed point (i.e., a deterministic trajectory, μ = 0 with no fluctuations) for the neural network dynamics. On the other hand, if _{} there is a homoclinic trajectory in Eq. 58 connecting the point *q* = *C** > 0, where *V*_{q} vanishes, to the point *C* = 0. This solution is interpreted by the authors as a chaotic solution in the neural network. A stability analysis shows that this is the only stable^{ 7 } stationary solution (Sompolinsky et al., 1998). The equation for the homoclinic solution is easily found using energy conservation and the fact that *V*_{q}(*q*) = 0 and _{}. One finds:

At fourth order in the Taylor expansion of *V*_{q} this gives

Though λ depends on *q*, it can be used as a free parameter for interpolating the curve of *C*(τ) obtained from numerical data.

#### Numerical experiments
*q*it can be used as a free parameter for interpolating the curve of*C*(τ) obtained from numerical data.#### Numerical experiments

This case is a good benchmark for our numerical procedure since we know analytically the solutions we are searching for. We expect to find two regimes. In one case the correlation function is identically 0 in the stationary regime, for sufficiently small *g* values or for a sufficiently small *q* (trivial case). The other case corresponds to a regime where *C*(τ) > 0 and *C*(τ) → 0 as τ → +∞ (“chaotic” case). This regime requires that *g* be sufficiently large and that *q* be large too. We took τ_{α} = 0.25, σ_{αα} = 1. For these values, the change in dynamics predicted by Sompolinsky and collaborators is *g*_{c} = 4.

In Sections “Existence and Uniqueness of Solutions in Finite Time” and “Existence and Uniqueness of Stationary Solutions” we have introduced the assumption of nondegeneracy of the noise, in order to ensure that the mean-field process is nondegenerate. However, in the present example, there is no external noise in the evolution, so we can observe the effects of relaxing this hypothesis in a situation where the results of Proposition 2 and of Theorem 4 cannot be applied. First, we observed numerically that, without external noise, the process can become degenerate (namely, some eigenvalues of the covariance matrix *C*_{α}(*t*, *s*) become very small and even vanish). This also affects the convergence of the method, which presents numerical instabilities, though the iterations lead to a curve which is well fitted by the theoretical results of Sompolinsky et al. (see Figure 3). The instability essentially disappears if one adds a small noise. Note, however, that in this case the solution does not match Sompolinsky et al.’s theoretical calculation (see Figure 3).

**Figure 3. Numerical solution of the mean-field equation after 14 iterations in the chaotic case (*g* = 5).** We clearly see the numerical instabilities in the no-noise case, which do not exist in the low-noise case.

Modulo this remark, we have first considered the trivial case corresponding to small

*g* values. We took *g* = 0.5 and *T* = 5. We chose as initial process the stationary Ornstein–Uhlenbeck process corresponding to the uncoupled system with Λ = 0.1. We drew μ_{α}(0) randomly from the uniform distribution in [−1, 1] and *v*_{α}(0) randomly from the uniform distribution in [0, 1].

Starting from this initial stationary process, we iterated the function _{1}. Then, during the iterations, we set *s*_{α} = 0 in order to match the conditions imposed by Sompolinsky and colleagues. We observe that the method converges toward the expected solution: the mean function converges to 0, while the variance *v*(*t*) decreases exponentially fast in time toward a constant value corresponding to the stationary regime. This asymptotic value decreases between two consecutive iterations, which is consistent with the theoretical expectation that *v*(*t*) = 0 in the stationary regime of the trivial case. Finally, we observe that the covariance *C*(*t* − *s*, *s*) stabilizes to a curve that does not depend on *s*, and the stationary value (large *t* − *s*) converges to 0.

We applied the same procedure for

*g* = 5, corresponding to the “chaotic” regime. The behavior was the same for μ(*t*) but quite different for the covariance function *C*(*t*, *s*). Indeed, while in the first case the stationary value of *v*(*t*) tends to 0 with the number of iterations, in the chaotic case it stabilizes to a finite value. In the same way, the covariance *C*(*t* − *s*, *s*) stabilizes to a curve that does not depend on *s*. The shape of this curve can be extrapolated thanks to Sompolinsky et al.’s results. We observe a very good agreement with the theoretical predictions with a fit _{} corresponding to the fourth-order expansion of *V*_{q}. Using a sixth-order expansion of _{}, which gives a fit

where ρ, *K*, λ are explicit functions of *a*, *b*, *c*, we obtain a slightly better approximation.

### Mean-Field Equations for Two Populations with a Negative Feedback Loop

Let us now present a case where the fluctuations of the Gaussian field act on the dynamics of μ_{α}(*t*) in a nontrivial way, with a behavior strongly departing from the naive mean-field picture. We consider two interacting populations where the connectivity weights are Gaussian random variables _{} for (α, β) ∈ {1, 2}^{2}. We set *S*_{β}(*x*) = tanh(*gx*) and *I*_{α} = 0, *s*_{α} = 0, α = 1, 2.

#### Theoretical framework

The dynamic mean-field equation for μ_{α}(*t*) is given, in differential form, by:

Let us denote by *G*_{α}(μ, *v*(*t*)) the function on the right-hand side of the equality. Since *S* is odd, _{}. Therefore, we have *G*_{α}(0, *v*(*t*)) = 0 whatever *v*(*t*), and hence the point μ_{1} = 0, μ_{2} = 0 is always a fixed point of this equation.

Let us study the stability of this fixed point. To this purpose, we compute the partial derivatives of *G*_{α}(μ, *v*(*t*)) with respect to μ_{β} for (α, β) ∈ {1, 2}^{2}. We have:

and hence at the point μ_{1} = 0, μ_{2} = 0, these derivatives read:

where _{}.

In the case *v*_{α}(0) = 0, *J* = 0, *s*_{α} = 0, implying *v*_{α}(*t*) = 0, *t* ≥ 0, the equation for μ_{α} reduces to:

which is the standard Amari–Cohen–Grossberg–Hopfield system. This corresponds to the naive mean-field approach where Gaussian fluctuations are neglected. In this case the stability of the fixed point μ = 0 is given by the sign of the largest eigenvalue of the Jacobian matrix of the system, which reads:

For the sake of simplicity we assume that the two time constants τ_{α} are equal and denote this value τ. The eigenvalues are in this case _{}, where λ are the eigenvalues of _{} and have the form:

Hence, they are complex whenever _{}, corresponding to a negative feedback loop between populations 1 and 2. Moreover, they have a nonzero real part only if _{} is nonzero (self-interaction). This opens up the possibility of an instability of the fixed point (μ = 0) leading to a regime where the average value of the membrane potential oscillates. This occurs if _{} and if *g* is larger than:

The corresponding bifurcation is a Hopf bifurcation.

The situation is different if one takes into account the fluctuations of the Gaussian field. Indeed, in this case the stability of the fixed point μ = 0 depends on *v*(*t*). More precisely, the real and imaginary parts of the eigenvalues of *DG*(0, *v*(*t*)) depend on *v*(*t*). Therefore, the variations of *v*(*t*) act on the stability and oscillation period of μ(*t*). Though the evolutions of μ(*t*) and *v*(*t*) are coupled, we cannot consider this evolution as a coupled dynamical system, since *v*(*t*) = *C*(*t*, *t*) is determined by the mean-field equation for *C*(*t*, *s*), which cannot be written as an ordinary differential equation. Note that we cannot assume stationarity here, as in the previous case, since μ(*t*) depends on time for sufficiently large *g*. This opens up the possibility of having complex dynamical regimes when *g* is large.

#### Numerical experiments
*v*(*t*). More precisely, the real and imaginary part of the eigenvalues of*DG*(0,*v*(*t*)) depend on*v*(*t*). Therefore, the variations of*v*(*t*) act on the stability and oscillations period of*v*(*t*). Though the evolution of μ(*t*),*v*(*t*) are coupled we cannot consider this evolution as a coupled dynamical system, since*v*(*t*)ÃÂ =ÃÂ*C*(*t*,*t*) is determined by the mean-field equation for*C*(*t*,*s*) which cannot be written as an ordinary differential equation. Note that we cannot assume stationarity here, as in the previous case, since μ(*t*) depends on time for sufficiently large*g*. This opens up the possibility of having complex dynamical regimes when*g*is large.#### Numerical experiments

We have considered the case _{}, giving a Hopf bifurcation for *g*_{c} = 2 when *J* = 0 (Figure 4). The trajectories of μ_{1}(*t*) and *v*_{1}(*t*) are represented in Figure 4 in the case *g* = 3. When *J* = 0, μ_{1}(*t*) presents regular oscillations (with nonlinear effects, since *g* = 3 is larger than the critical value for the Hopf bifurcation, *g*_{c} = 2). In this case, the solution *v*_{1}(*t*) = 0 is stable, as seen in the figure. When *J* ≠ 0 the Gaussian field has (small) fluctuations which nevertheless strongly interact with the dynamics of μ_{1}(*t*), leading to a regime where μ_{1}(*t*) and *v*_{1}(*t*) oscillate periodically.

**Figure 4. Evolution of the mean μ_{1}(*t*) and variance *v*_{1}(*t*) for the mean-field of population 1, for *J* = 0 and 2, over a time window [0, 20].** *n* is the number of iterations of _{1} defined in Section “Existence and Uniqueness of Solutions in Finite Time”. This corresponds to a number of iterations for which the method has essentially converged (up to some precision). Note that *v*_{1}(*t*) has been magnified by a factor of 100. Though Gaussian fluctuations are small, they have a strong influence on μ_{1}(*t*).

The problem of bridging scales is a pervasive one in the study of complex systems, and in particular in neuroscience. After many others, we have looked at this difficult problem from the theoretical and numerical viewpoints, hoping to get closer to its solution from relatively simple and physically/biologically plausible first principles and assumptions. One of our motivations is to better understand such phenomenological neural mass models as that of Jansen and Rit (1995).

We consider several populations of neurons and start from a microscopic, i.e., individual, description of the dynamics of the membrane potential of each neuron, which contains four terms.

The first one controls the intrinsic dynamics of the neuron. It is linear in this article but this assumption is not essential and could probably be safely removed if needed.

The second term is a stochastic input current, correlated or uncorrelated. The corresponding noise model is very rich, depending on the degree *k* of smoothness of the g-shapes. It features integrated Brownian noise up to order *k* − 1.

The third term is a deterministic input current, and the fourth one describes the interaction between the neurons through random connectivity coefficients that weigh the contributions of other neurons through a set of functions applied to their membrane potentials. The only hypothesis on these functions is that they are smooth and bounded, as well as their first-order derivatives. The obvious choice of sigmoids is motivated by standard rate-model ideas. Another appealing choice is a smooth approximation to a Dirac delta function, thereby opening a window on the world of spiking neurons. Thus, the model presented in this paper is

*more general* than the instantaneous rate model underlying Ermentrout’s voltage-based model (Ermentrout, 1998), even though we have not explored this avenue.

We then derive the mean-field equations and provide a new, constructive proof, under some mild assumptions, of the existence and uniqueness of a solution of these equations over finite and infinite time intervals. The key idea is to look at this mean-field description as a *global problem* on the probability distribution of the membrane potentials, unlike previous studies. Our proof provides an efficient way of computing this solution, and our numerical experiments show good agreement with previous studies. It is interesting to note that a sufficient condition for the convergence of our algorithm is related to the previously mentioned noise model. We prove that if the noise matrix **F** is full rank, with bounded eigenvalues, then the algorithm is in general convergent. An important fact to note is that the solutions of the mean-field equations that we construct are fundamentally *non-Markovian*, eliminating the need for such approximations as the introduction of the *q* parameter summarizing the whole history of the non-Markovian process (see below).

In the case where the nonlinearities are chosen to be sigmoidal, our results shed new light on existing neural mass models. Indeed, as shown in Section “General Derivation of the Mean-Field Equation”, these appear as approximations of the mean-field equations in which the intricate but fundamental coupling between the time variations of the mean membrane potentials and their fluctuations, as represented by the covariance functions, is neglected.

An alternative approach has recently been proposed by Chizhov and collaborators^{ 8 } (Chizhov and Graham, 2007; Chizhov et al., 2007). The approach of these authors consists in reducing the large number, *N*, of degrees of freedom of the neural assembly by constructing a probability density ρ on the phase space of neuron states in the limit *N* → ∞. This is a non-rigorous approach where the evolution equations for ρ are heuristically derived. In particular, it is assumed that ρ depends on two parameters only: the current time *t* and the time elapsed since the last spike *t**. Under these assumptions the initial phase space of neuron states is mapped to a two-dimensional space *t*, *t**, while ρ(*t*, *t**)*dt* characterizes the fraction of neurons which have fired in the time interval [*t* − *t**, *t* − *t** + *dt*]. Therefore, this approach intrinsically holds for integrate-and-fire neuron models, where the neuron’s membrane potential history is summarized by the last spike time, at which it is reset to a constant value. As noticed by these authors, this makes it possible to circumvent the main problem in mean-field approaches for firing-rate models, which we also discuss in the present paper: when using mean-field theory to characterize stationary regimes, one needs to introduce ad hoc parameters (see e.g., the parameter *q* introduced in Section “Stationary Solutions”) summarizing the whole history of the non-Markovian process. Introducing a “history cut-off” while resetting the membrane potential to a constant value indeed removes this difficulty. Therefore, it might be interesting to compare our approach, in the case of integrate-and-fire models (see the above remark on the choice of the nonlinearity), to the approach of Chizhov and collaborators. This could provide some rigorous basis for their analysis and help elucidate the role of field fluctuations, which do not appear explicitly in the probability density approach.

On more general grounds, our goal is now to extend the present work in several directions.

### Bifurcations Analysis of the Dynamic Mean-Field Equations

From the present analysis, and as shown in the simple examples of Section “Numerical Experiments”, the mesoscopic dynamics of the average membrane potential of a neuron population can be quite different from that of the classical phenomenological equations à la Jansen–Rit if one includes the non-Markovian fluctuations of the interaction fields, which summarize the cumulative effects of the nonlinear interactions of a given neuron with the bulk of the other neurons. Jansen–Rit equations are commonly used in the neuroscience community either to anticipate the dynamics of local field potentials in relation to imaging (optical imaging, MEG/EEG), or to understand neurophysiological disorders such as epilepsy. Bifurcation analysis of these equations reveals dynamical regimes that can be related to experiments (Grimbert and Faugeras, 2006). They can be generalized using more accurate neural models (Wendling et al., 2005). Is there any need to generalize these equations, which we claim to be incomplete, while people commonly use them with some satisfaction? Are there new, *experimentally accessible* phenomena that can be exhibited by the generalized mean-field equations and that do not appear in the naive ones? These are obviously important questions that we intend to address in the near future. On mathematical grounds, the goal is to carry out a bifurcation analysis of the map on the space of trajectories introduced in the present paper. Do any new salient dynamical regimes appear? If such regimes exist, the goal will be, on experimental grounds, to interact with experimentalists in order to see under which conditions such a regime can be exhibited, and what its implications are for cortical column dynamics or function.

### Investigations of Nonstationary Regimes

As discussed in this paper, and as is well known in the physics community (especially the spin-glass community), the dynamic mean-field approach raises serious difficulties as soon as one tries to describe *stationary dynamics*. On technical grounds, this stems from the non-commutativity of the two limits *N* → ∞ and *t* → ∞, already discussed in Sompolinsky and Zippelius (1982). As a result, one is led to introduce *ad hoc* phenomenological parameters, depending on initial conditions, that can be determined in statistical physics models where the distribution of equilibria is known (Gibbs distribution), using sophisticated techniques such as the replica “trick” (Houghton et al., 1983). For spin-glasses it is only in the high-temperature regime that a simple solution to this problem is known. This restriction also appears in the present paper, where the existence and uniqueness of a stationary solution is proved only for low values of the gain parameter *g* (which plays a role similar to the inverse temperature). However, we are not so much interested in stationary dynamics, since brain processes are ultimately nonstationary. Our approach, valid for any finite time *T*, opens up the possibility of characterizing mean-field equations in transient regimes, with an analysis strategy that can moreover be easily implemented. To the best of our knowledge, this type of technique has never been used in the statistical physics community, where iterations on spaces of trajectories are not in the standard toolbox. Therefore, our work could allow the (numerical) investigation of cortical columns subjected to nonstationary inputs, with strong implications for neuroscience.

### Extension to a Larger Class of Models

A very challenging question is the application of this theory to spiking neuron models. We have briefly mentioned in Section “Discussion” that this may be possible through the use of non-sigmoidal functions in the interaction terms. This idea could be applied to the analysis of integrate-and-fire models with conductance-based synapses, which constitute good models of spiking neurons. As discussed at the end of Section “Discussion”, the analysis of the mean-field equations could be simplified by the fact that memory is reset after a neuron fires. There is, however, a need to characterize the regions of parameter space where neurons can take an arbitrarily long time to fire for the first time (Cessac, 2008; Cessac and Viéville, 2008). This is the main obstruction to the application of our theory to this type of model.

### A. Identification of the Mean-Field Equations

Ben-Arous and Guionnet studied from a mathematical point of view the problem of finding a mean-field description of large networks of spin-glasses. Using different methods of stochastic analysis, they obtained a weak limit for the law of the individual spins and proved their asymptotic independence.

Our equations do not directly fit into their study: indeed, the intrinsic spin dynamics is nonlinear while the interaction is linear, and everything is done in dimension one. Nevertheless, their proof extends to our case, which is in some respects simpler. For instance, in the case of the Simple Model with one population, we can readily adapt their proof. More precisely, let *P* = 1; the equation of the network reads:In this case, we define for

_{}the effective interaction term_{}which is the effective interaction process defined in Section “The Mean-Field Equations”, i.e., the Gaussian process of mean_{}and of covariance:_{}Let us denote the law of the membrane potential when there is no interaction (it is an Ornstein–Uhlenbeck process), and the empirical measure

_{}We can prove that, under the probability distribution averaged over the connectivities (see below), the empirical measure satisfies a large deviation principle with good rate function *H* defined as in Guionnet (1997). Using this large deviation result, we can prove annealed and quenched tightness of the empirical measure, and finally its convergence toward the unique process at which the good rate function *H* achieves its unique minimum. This process is defined by the property of having a density with respect to , whose density satisfies the implicit equation:where 𝔼 denotes the expectation over the effective interaction process *U*. Following the steps of Ben-Arous and Guionnet (1997), we can also prove that there exists a unique solution to this equation, and that this solution satisfies the nonlinear non-Markovian stochastic differential equation:

which can also be written as our mean-field equation averaged over the connectivities (see Ben-Arous and Guionnet, 1995). More precisely, let *L* be the law of the solution of the equation:which is exactly Eq. 33. They prove that *V* satisfies the nonlinear equation:This result can probably be extended to the multi-population case using the multidimensional Girsanov theorem, but the corresponding mathematical developments are beyond the scope of this paper.
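The fixed-point structure of this implicit equation can be made concrete numerically. The sketch below (Python/NumPy) treats a hypothetical one-population simple model: the drift −λ*V* + *U*, the sigmoid *S*, and all parameter values are assumptions for illustration, not the paper's exact equations. It iterates the map sending the first two moments of *S*(*V*) to the Gaussian law of the effective interaction *U*, and then, by Euler–Maruyama Monte Carlo, to the law of *V*.

```python
import numpy as np

# Minimal numerical sketch of the one-population fixed-point construction.
# ALL names and parameter values below (l, Jbar, sJ, s_ext, the sigmoid S,
# the drift -l*V + U) are illustrative assumptions, not the paper's exact
# equations.  We iterate:
#   moments of S(V)  ->  Gaussian law of the effective interaction U
#                    ->  law of V  (by Euler-Maruyama Monte Carlo).

rng = np.random.default_rng(0)
l, Jbar, sJ, s_ext = 1.0, 0.5, 0.3, 0.2   # decay, mean weight, weight std, noise
T, n, M = 1.0, 50, 2000                   # horizon, time steps, sample paths
dt = T / n

def S(v):
    """Sigmoidal rate function."""
    return 1.0 / (1.0 + np.exp(-v))

mean_S = np.zeros(n + 1)                  # initial guess for E[S(V_t)]
cov_S = np.zeros((n + 1, n + 1))          # initial guess for E[S(V_s) S(V_t)]

for _ in range(8):                        # a few fixed-point iterations
    # Sample the Gaussian effective interaction U from the current moments.
    mu = Jbar * mean_S
    C = sJ ** 2 * cov_S + 1e-10 * np.eye(n + 1)   # jitter for Cholesky
    U = mu + rng.standard_normal((M, n + 1)) @ np.linalg.cholesky(C).T
    # Euler-Maruyama for dV = (-l*V + U) dt + s_ext dW, one path per U sample.
    V = np.zeros((M, n + 1))
    dW = np.sqrt(dt) * rng.standard_normal((M, n))
    for k in range(n):
        V[:, k + 1] = V[:, k] + (-l * V[:, k] + U[:, k]) * dt + s_ext * dW[:, k]
    # Update the moments of S(V) that define the next Gaussian process.
    SV = S(V)
    mean_S = SV.mean(axis=0)
    cov_S = (SV[:, :, None] * SV[:, None, :]).mean(axis=0)

print(mean_S[-1])   # current estimate of E[S(V_T)] after the iterations
```

For small coupling one expects this iteration to stabilize after a few passes, mirroring the contraction argument used in the main text for the finite-horizon problem.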

### B. The Resolvent

In this appendix we introduce, and give some useful properties of, the resolvent Φ_{L} of a homogeneous differential equation:where _{}.

**Definition B.1.** The resolvent of Eq. 61 is defined as the unique solution of the linear equation:

where Id_{P} is the *P* × *P* identity matrix.

**Proposition B.1.** The resolvent satisfies the following properties:

(i) Φ_{L}(*t* + *s*, *t*_{0}) = Φ_{L}(*t* + *s*, *t*) · Φ_{L}(*t*, *t*_{0});

(ii) Φ_{L}(*t*, *t*_{0}) is invertible, with inverse Φ_{L}(*t*_{0}, *t*), which satisfies:

(iii) Let ‖ ‖ be a norm on _{P × P} and assume that _{} on [*t*_{0}, *T*]. Then we have:Similarly, if _{} on [*t*_{0}, *T*] we have:

(iv) We have

*Proof.* Properties (i) and (ii) are directly linked to the group property of the flow of a reversible ODE. Property (iii) is an application of Gronwall’s lemma. Property (iv) is obtained by a first-order Taylor series expansion.

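These properties are straightforward to check numerically. The sketch below (an illustration, with an arbitrarily chosen time-dependent matrix *L*(*t*)) integrates the defining equation dΦ/d*t* = *L*(*t*)Φ, Φ(*t*₀, *t*₀) = Id, with a fixed-step RK4 scheme and verifies properties (i) and (ii):

```python
import numpy as np

# Numerical sketch of the resolvent Phi_L(t, t0) of dx/dt = L(t) x: we
# integrate dPhi/dt = L(t) Phi, Phi(t0, t0) = Id, with a classical RK4
# scheme, then check the group property (i) and the inversion property (ii).
# The particular L(t) below is an arbitrary illustrative choice.

def L(t):
    return np.array([[-1.0, 0.5 * np.sin(t)],
                     [0.2 * np.cos(t), -0.8]])

def resolvent(t, t0, steps=2000):
    """Fixed-step RK4 integration of the matrix ODE defining Phi_L(t, t0)."""
    Phi = np.eye(2)
    h = (t - t0) / steps          # h < 0 integrates backwards in time
    s = t0
    for _ in range(steps):
        k1 = L(s) @ Phi
        k2 = L(s + h / 2) @ (Phi + h / 2 * k1)
        k3 = L(s + h / 2) @ (Phi + h / 2 * k2)
        k4 = L(s + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return Phi

# (i) group property: Phi(t2, t0) = Phi(t2, t1) . Phi(t1, t0)
assert np.allclose(resolvent(2.0, 0.0),
                   resolvent(2.0, 1.0) @ resolvent(1.0, 0.0), atol=1e-6)
# (ii) inversion: Phi(t0, t1) = Phi(t1, t0)^{-1}
assert np.allclose(resolvent(0.0, 1.0),
                   np.linalg.inv(resolvent(1.0, 0.0)), atol=1e-6)
```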

**Theorem B.2 (Solution of an inhomogeneous linear SDE).** The solution of the inhomogeneous linear stochastic differential equation:can be written using the resolvent:

*Proof.* Pathwise (strong) uniqueness of the solution follows directly from the results on SDEs with Lipschitz coefficients (see, e.g., Karatzas and Shreve, 1991, Theorem 2.5 of Chapter 5). It is clear that *X*_{t_{0}} = *X*_{0}. We use Itô’s formula for the product of two stochastic processes to prove that the process of Eq. 67 is a solution of Eq. 66:

Hence the theorem is proved.
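As a sanity check of this representation, consider the scalar constant-coefficient case, where the resolvent reduces to Φ(*t*, *t*₀) = e^{−λ(*t*−*t*₀)} and the formula yields the first two moments of *X*_{T} in closed form; these can be compared with a direct Euler–Maruyama simulation (the parameter values below are illustrative assumptions):

```python
import numpy as np

# Numerical sanity check of the representation formula in the scalar,
# constant-coefficient case dX_t = (-l*X_t + I) dt + s dW_t, where the
# resolvent reduces to Phi(t, t0) = exp(-l*(t - t0)).  The formula gives
#   X_T = e^{-l T} X_0 + (I/l)(1 - e^{-l T}) + s \int_0^T e^{-l(T-u)} dW_u,
# hence E[X_T] and Var[X_T] below; we compare them with Euler-Maruyama.
# Parameter values are illustrative assumptions.

rng = np.random.default_rng(1)
l, I, s, x0, T = 1.5, 0.7, 0.4, 0.0, 2.0
n, M = 1000, 100_000
dt = T / n

X = np.full(M, x0)
for _ in range(n):
    X = X + (-l * X + I) * dt + s * np.sqrt(dt) * rng.standard_normal(M)

mean_exact = np.exp(-l * T) * x0 + (I / l) * (1.0 - np.exp(-l * T))
var_exact = s ** 2 / (2.0 * l) * (1.0 - np.exp(-2.0 * l * T))

print(abs(X.mean() - mean_exact), abs(X.var() - var_exact))  # both small
```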

### C. Matrix Norms

In this section we recall some definitions on matrix and vector norms. Let

_{n × n} be the set of *n* × *n* real matrices. It is a vector space of dimension *n*^{2} on which the usual *L*^{p} norms, 1 ≤ *p* ≤ ∞, can be defined. Given **L** ∈ _{n × n}, we note _{} the corresponding norm. Given a vector norm, noted ‖ ‖, on ^{n}, the induced norm, also noted ‖ ‖, on _{n × n} is defined asSince _{n × n} is finite dimensional, all norms are equivalent. In this article we use the following norms:

(i) _{}

(ii) _{}

(iii) _{}

This so-called *spectral* norm is equal to the square root of the largest singular value of **L**, which is the largest eigenvalue of the positive matrix **L**^{T}**L**. If **L** is positive definite, this is its largest eigenvalue, which is also called its spectral radius, noted ρ(**L**).

### D. Important Constants

Table 1 summarizes some notations which are introduced in the article and used in several places.

### E. Proof of Lemma 2

**Lemma E.1.** The following upper bounds are valid for all *n* ≥ 1 and all *s*, *t* ∈ [*t*_{0}, *T*].

where μ and σ_{max} are defined in Lemma 1, and _{} is defined in Assumption 1.

*Proof.* The first inequality follows from taking the infinity norm of both sides of Eq. 41 and using Assumption (a) in 1, Eq. 64, Lemma 1, and Assumption (c) in 1.

The second inequality follows from taking the infinity norm of both sides of Eq. 42 and using Assumption (a) in 1, Eqs. 64 and 65, Lemma 1, and Assumption (b) in 1.

### F. Proof of Lemma 3

**Lemma F.1.** For all *t* ∈ [*t*_{0}, *T*], all α = 1,…,*kP*, and all *n* ≥ 1, we have

where λ_{min} is the smallest singular value of the symmetric positive definite matrix Φ_{L}(*t*, *t*_{0})Φ_{L}(*t*, *t*_{0})^{T} for *t* ∈ [*t*_{0}, *T*], and _{} is the smallest eigenvalue of the symmetric positive definite covariance matrix

*Proof.*

_{} is larger than _{}, which is itself larger than the smallest eigenvalue of the matrix _{}This smallest eigenvalue is equal to

In the last expression, the first term is larger than the smallest eigenvalue _{} of the matrix, which is positive definite since we have assumed the Gaussian random variable *Z*_{0} to be nondegenerate. The second term is equal to the smallest singular value _{} of the matrix _{}, which is also strictly positive since _{} is invertible for all *t* ∈ [*t*_{0}, *T*] (see Appendix B).

### G. Proof of Lemma 4

**Lemma G.1.** For all α = 1,…,*kP* and all *n* ≥ 1, the quantity _{} is lowerbounded by the positive symmetric function:

where _{} is the strictly positive lower bound, introduced in 3.1, on the singular values of the matrix **F**(*u*) for *u* ∈ [*t*_{0}, *T*].

*Proof.* We use Eq. 42, which we rewrite as follows, using the group property of the resolvent Φ_{L}:

We now assume *s* < *t* and introduce the following notations:Let *e*_{α}, α = 1,…,*kP*, be the unit vector of the canonical basis whose coordinates are all equal to 0, except the αth one, which is equal to 1. We note *E*_{α}(*t*) the vector _{}We have, dropping the index *n* for simplicity:Note that the last expression does not depend on *s*, since *A*(*s*) + *B*(*s*, *t*) = *A*(*t*), which is consistent with the first equality. The reason why we introduce *s* in this expression is to simplify the following calculations.

The expression _{} is the sum of four sub-expressions:which is ≥ 0 because *A*(*s*) is a covariance matrix,which is also ≥ 0 because *a*(*t*, *s*) is a covariance matrix function,Because *a*(*t*, *s*) is a covariance matrix function, we haveand, as seen above, _{}Because _{}we also haveand, as can be readily verified, this implies _{}Therefore, we can lowerbound _{} by the fourth sub-expression:since *B*(*s*, *t*) and *a*(*s*, *s*) are covariance matrices. We next haveby definition of _{}Thereforewhere _{} is the smallest eigenvalue of the symmetric positive matrix *C*. Similarly we haveLet us write γ(*u*) = **F**(*u*)**F**(*u*)^{T}. We have (Assumption 1):Combining these results, we have

### H. Proof of Lemma 5

**Lemma H.1.** The 2*n*-dimensional integral

where the functions ρ_{i}(*u*_{i}, *v*_{i}), *i* = 1,…,*n*, are either equal to 1 or to _{} (the function θ is defined in Lemma 4), is upperbounded by *k*^{n}/(*n* − 1)! for some positive constant *k*.

*Proof.* First note that the integral is well-defined because of Lemma 4. Second, note that there exists a constant *K* such that _{} for all (*u*, *v*) ∈ [*t*_{0}, *t* ∨ *s*]^{2}, i.e., _{}. Therefore the integral is upperbounded by _{}, where *K*_{0} = max(1, *K*), times the integral obtained when _{} for all *i* = 1,…,*n*. Let us then consider this situation. Without loss of generality we assume *t*_{0} = 0. The cases *n* = 1, 2, 3 allow one to understand the process.

Let us rotate the axes by

_{}by performing the change of variables:Using the symmetry of the integrand in *s* and *t* and the change of variables, the integral on the righthand side of Eq. 68 is equal to (see Figure 5):

**Figure 5. The change of coordinates.**

where _{} and _{}Let us now look at *I*_{2}. It is upperbounded by the factor _{} times the integral

Since in the area of integration _{}, we are led to the product of 2/5 with the one-dimensional integralwhere _{}Similarly, *I*_{3} is upperbounded by the product of _{} with the integralwhere _{}One then easily shows that:It can be verified, using a system for symbolic computation, that 0 < α_{i} < 1 for all *i* ≥ 1. One also notices thattherefore

and this finishes the proof.
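The factorial gain in this bound is the usual Picard-iteration mechanism: each additional level integrates over a nested time interval. The sketch below illustrates it with a constant kernel *K* standing in for the bounded integrand (an assumption for illustration), evaluating *I*_{n}(*t*) = *K*∫₀ᵗ *I*_{n−1}(*u*)d*u* numerically and comparing with the closed form (*Kt*)ⁿ/*n*!:

```python
import math

# Illustration of the factorial decay mechanism behind the k^n/(n-1)! bound:
# iterated integrals over nested time intervals.  A constant kernel K stands
# in for the bounded integrand (an illustrative assumption); then
#   I_n(t) = K * \int_0^t I_{n-1}(u) du,  I_0 = 1,
# has the closed form I_n(t) = (K t)^n / n!.  We evaluate the recursion with
# the trapezoid rule and compare with the closed form.

def iterated_bound(K, t, n, m=400):
    """Evaluate I_n(t) by m-interval trapezoid quadrature at each level."""
    h = t / m
    grid = [1.0] * (m + 1)              # I_0(u) = 1 on the grid
    for _ in range(n):
        new = [0.0] * (m + 1)
        acc = 0.0
        for i in range(1, m + 1):
            acc += 0.5 * h * (grid[i - 1] + grid[i])   # running integral of I_{n-1}
            new[i] = K * acc
        grid = new
    return grid[m]

for n in (1, 3, 6):
    closed_form = (2.0 * 1.0) ** n / math.factorial(n)   # K = 2, t = 1
    print(n, iterated_bound(2.0, 1.0, n), closed_form)
```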

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

This research was partly supported by funding of the European Union under the grant no. 15879 (FACETS) and the Fondation d’Entreprise EADS. It was also supported by the MACACC ARC INRIA and the Doeblin CNRS Federation.

**^** When we come to the mean-field equations, they will be modeled as random variables.

**^** We have modified the original model, which is not voltage-based.

**^** We note Γ(*t*) the matrix **F**(*t*)**F**(*t*)^{T}.

**^** In Bogachev (1998; Chapter 3.8), the property is stated whenever the mean and covariance are defined on a separable Hilbert space.

**^** For simplicity we abuse notations and identify and *X*.

**^** The notation is introduced in Appendix C.

**^** More precisely, this is the only minimum of the large deviation functional.

**^** We thank one of the referees for pointing out these references to us.