Frontiers in Computational Neuroscience

A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs

We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. This new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as that of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales.


INTRODUCTION
Modeling neural activity at scales integrating the effect of thousands of neurons is of central importance for several reasons. First, most imaging techniques are not able to measure individual neuron activity (the "microscopic" scale), but instead measure mesoscopic effects resulting from the activity of several hundreds to several hundreds of thousands of neurons. Second, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50 μm to 1 mm, containing of the order of 100 to 100,000 neurons belonging to a few different species. These columns have specific functions. For example, in the visual cortex V1, they respond to preferential orientations of bar-shaped visual stimuli. In this case, information processing does not occur at the scale of individual neurons but rather corresponds to an activity integrating the collective dynamics of many interacting neurons, resulting in a mesoscopic signal. The description of this collective dynamics requires models which are different from individual-neuron models. In particular, if the accurate description of one neuron requires m parameters (such as sodium, potassium, and calcium conductances, membrane capacitance, etc.), it is not necessarily true that an accurate mesoscopic description of an assembly of N neurons requires Nm parameters. Indeed, when N is large enough, averaging effects appear, and the collective dynamics is well described by an effective mean field, summarizing the effect of the interactions of a neuron with the other neurons and depending on a few effective control parameters. This vision, inherited from statistical physics, requires that the space scale be large enough to include a large number of microscopic components (here neurons) and small enough so that the region considered is homogeneous. This is in effect the case of cortical columns, for instance.

However, obtaining the evolution equations of the effective mean field from the microscopic dynamics is far from evident. In simple physical models this can be achieved via the law of large numbers and the central limit theorem, provided that time correlations decrease sufficiently fast. This type of approach has been generalized to such fields as quantum field theory or non-equilibrium statistical mechanics. To the best of our knowledge, the idea of applying mean-field methods to neural networks dates back to Amari (Amari, 1972; Amari et al., 1977). In his approach, the author uses an assumption that he called the "local …"

By contrast, these effects do not appear in the neural network considered in Sompolinsky et al. (1998) because the synaptic weights are independent (Cessac, 1995) (and especially non-symmetric, in opposition to spin glasses). In this case, the Amari approach and the dynamic mean-field approach lead to the same mean-field equations.
Later on, the mean-field equations derived by Sompolinsky and Zippelius (1982) for spin glasses were rigorously obtained by Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). The application of their method to a discrete-time version of the neural network considered in Sompolinsky et al. (1998) and in Molgedey et al. (1992) was done by Moynot and Samuelides (2002).
Mean-field methods are often used in the neural network community but there are only a few rigorous results using the dynamic mean-field method. The main advantage of dynamic mean-field techniques is that they allow one to consider neural networks where synaptic weights are random (and independent). The mean-field approach allows one to state general and generic results about the dynamics as a function of the statistical parameters controlling the probability distribution of the synaptic weights (Samuelides and Cessac, 2007). It not only provides the evolution of the mean activity of the network but, because it is an equation on the law of the mean field, it also provides information on the fluctuations around the mean and their correlations. These correlations are of crucial importance as revealed in the paper by Sompolinsky et al. (1998). Indeed, in their work, the analysis of correlations allows them to discriminate between two distinct regimes: a dynamics with a stable fixed point and a chaotic dynamics, while the mean is identically 0 in the two regimes.
However, this approach has also several drawbacks explaining why it is so seldom used. First, this method uses a generating function approach that requires heavy computations and some "art" for obtaining the mean-field equations. Second, it is hard to generalize to models including several populations. Finally, dynamic mean-field equations are usually supposed to characterize in fine a stationary process. It is then natural to search for stationary solutions. This considerably simplifies the dynamic mean-field equations by reducing them to a set of differential equations (see Section "Numerical Experiments") but the price to pay is the unavoidable occurrence in the equations of a non-free parameter, the initial condition, that can only be characterized through the investigation of the nonstationary case.
Hence it is not clear whether such a stationary solution exists, and, if it is the case, how to characterize it. To the best of our knowledge, this difficult question has only been investigated for neural networks in one paper by Crisanti et al. (1990). Different alternative approaches have been used to get a mean-field description of a given neural network and to find its solutions. In the neuroscience community, a static mean-field study of multi-population network activity was developed by Treves (1993). This author did not consider external inputs but incorporated dynamical synaptic currents and adaptation effects. His analysis was completed in Abbott and Van Vreeswijk (1993), where the authors considered a unique population of nonlinear oscillators subject to a noisy input current. They proved, using a stationary Fokker-Planck formalism, the stability of an asynchronous state in the network. Later on, Gerstner (1995) built a new approach to characterize the mean-field dynamics for the Spike Response Model, via the introduction of suitable kernels propagating the collective activity of a neural population in time. Brunel and Hakim (1999) considered a network composed of integrate-and-fire neurons connected with constant synaptic weights. In the case of sparse connectivity, stationarity, and considering a regime where individual neurons emit spikes at low rate, they were able to study analytically the dynamics of the network and to show that the network exhibited a sharp transition between a stationary regime and a regime of weakly synchronized fast collective oscillations. Their approach was based on a perturbative analysis of the Fokker-Planck equation. A similar formalism was used in Mattia and Del Giudice (2002) which, when complemented with self-consistency equations, resulted in the dynamical description of the mean-field equations of the network, and was extended to a multi-population network.
Finally, Chizhov and Graham (2007) have recently proposed a new method based on a population density approach allowing one to characterize the mesoscopic behavior of neuron populations in conductance-based models. We briefly discuss their approach and compare it to ours in Section "Discussion".
In the present paper, we investigate the problem of deriving the equations of evolution of neural masses at mesoscopic scales from the dynamics of individual neurons, using a new and rigorous approach based on stochastic analysis.
The article is organized as follows. In Section "Mean-Field Equations for Multi-Populations Neural Network Models" we derive from first principles the equations expressing the membrane potential of each of a set of neurons as a function of the external injected current and noise and of the shapes and intensities of the postsynaptic potentials, in the case where these shapes depend only on the postsynaptic neuron (the so-called voltage-based model).
Assuming that the shapes of the postsynaptic potentials can be described by linear (possibly time-dependent) differential equations, we express the dynamics of the neurons as a set of stochastic differential equations. Assuming that the synaptic connectivities between neurons satisfy statistical relationships depending only on the populations they belong to, we obtain the mean-field equations summarizing the interactions of the P populations in the limit where the number of neurons tends to infinity. These equations can be derived in several ways, either heuristically following the lines of Amari (Amari, 1972; Amari et al., 1977), Sompolinsky (Crisanti et al., 1990; Sompolinsky et al., 1998), and Cessac (Cessac, 1995; Samuelides and Cessac, 2007), or rigorously as in the work of Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). The purpose of this article is not the derivation of these mean-field equations but to prove that they are well-posed and to provide an algorithm for computing their solution. Before we do this we provide the reader with two important examples of such mean-field equations. The first example is what we call the simple model, a straightforward generalization of the case studied by Amari and Sompolinsky. The second example is a neuronal assembly model, or neural mass model, as introduced by Freeman (1975) and exemplified in Jansen and Rit's (1995) cortical column model.
In Section "Existence and Uniqueness of Solutions in Finite Time" we consider the problem of solutions over a finite time interval [t_0, T]. We prove, under some mild assumptions, the existence and uniqueness of a solution of the dynamic mean-field equations given an initial condition at time t_0. The proof consists in showing that a nonlinear equation defined on the set of multidimensional Gaussian random processes defined on [t_0, T] has a fixed point. We extend this proof in Section "Existence and Uniqueness of Stationary Solutions" to the case of stationary solutions over the time interval (−∞, T] for the simple model. Both proofs are constructive and provide an algorithm for computing numerically the solutions of the mean-field equations. We then study in Section "Numerical Experiments" the complexity and the convergence rate of this algorithm and put it to good use: we first compare our numerical results to the theoretical results of Sompolinsky and colleagues (Crisanti et al., 1990; Sompolinsky et al., 1998). We then provide an example of numerical experiments in the case of two populations of neurons where the role of the mean-field fluctuations is emphasized.
Throughout the paper we introduce several constants. To help the reader we have collected in Table 1 of Appendix D the most important ones, together with the places where they are defined in the text.

MEAN-FIELD EQUATIONS FOR MULTI-POPULATIONS NEURAL NETWORK MODELS
In this section we introduce the classical neural mass models and compute the related mean-field equations they satisfy in the limit of an infinite number of neurons.

General framework
We consider a network composed of N neurons indexed by i ∈ {1,…,N} belonging to P populations indexed by α ∈ {1,…,P}, such as those shown in Figure 1. Let N_α be the number of neurons in population α. We have N = Σ_{α=1}^{P} N_α. We define p(i) ∈ {1,…,P} as the population which neuron i, i = 1,…,N, belongs to.
We consider that each neuron i is described by its membrane potential V_i(t), and the related instantaneous firing rate is deduced from it through a relation of the form ν_i(t) = S_i(V_i(t)) (Dayan and Abbott, 2001; Gerstner and Kistler, 2002), where S_i is a sigmoidal function.
A single action potential from neuron j generates a postsynaptic potential PSP_ij(u) on the postsynaptic neuron i, where u is the time elapsed after the spike is received. We neglect the delays due to the distance traveled down the axon by the spikes.
Assuming that the postsynaptic potentials sum linearly, the average membrane potential of neuron i is

V_i(t) = Σ_j Σ_k PSP_ij(t − t_k^j),

where the sum is taken over the arrival times t_k^j of the spikes produced by the neurons j after some reference time t_0. The number of spikes arriving between t and t + dt is ν_j(t)dt. Therefore we have

V_i(t) = Σ_j ∫_{t_0}^{t} PSP_ij(t − s) ν_j(s) ds.   (1)

The PSP_ij's can depend on several variables in order to account for instance for adaptation or learning.
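As a quick numerical illustration of this passage from spike sums to rate integrals, the following sketch (all parameter values and variable names are our own, not the paper's) compares the membrane potential obtained by summing individual exponential PSPs over sampled spike times with the convolution of the firing rate with the same PSP shape:

```python
import numpy as np

# Hypothetical illustration: the summed postsynaptic potentials produced
# by a Poisson spike train of rate nu(t) are well approximated, when many
# spikes contribute, by the convolution V(t) = ∫ PSP(t - s) nu(s) ds,
# since nu(s) ds spikes arrive in [s, s + ds].

rng = np.random.default_rng(0)
dt, T, tau, J = 1e-3, 2.0, 0.05, 0.8            # step, horizon, PSP decay, weight
t = np.arange(0.0, T, dt)
nu = 50.0 * (1.0 + 0.5 * np.sin(2 * np.pi * t))  # firing rate in Hz

psp = J * np.exp(-t / tau)                       # exponential PSP shape (g-shape)

# Spike-based membrane potential: sum PSPs over sampled spike times.
spikes = rng.random(t.size) < nu * dt            # thinning of the Poisson process
V_spikes = np.convolve(spikes.astype(float), psp)[: t.size]

# Rate-based membrane potential: convolution with the rate.
V_rate = np.convolve(nu, psp)[: t.size] * dt

# Their difference is the spike-level fluctuation around the rate prediction.
print(float(np.max(np.abs(V_spikes - V_rate))))
```

The spike-based trace fluctuates around the rate-based one; the two agree on average, which is the content of the passage from the spike sum to Eq. 1.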
We now make the simplifying assumption that the shape of the postsynaptic potential PSP_ij only depends on the postsynaptic population, which corresponds to the voltage-based models in Ermentrout's (1998) classification.
The voltage-based model. The assumption, made in Hopfield (1984), is that the postsynaptic potential has the same shape no matter which presynaptic population caused it; the sign and amplitude may vary though. This leads to the relation

PSP_ij(t) = J̄_ij g_i(t),

where g_i represents the unweighted shape (called a g-shape) of the postsynaptic potentials and J̄_ij is the strength of the postsynaptic potentials elicited by neuron j on neuron i. At this stage of the discussion, these weights are supposed to be deterministic. This is reflected in the notation J̄_ij, which indicates an average value. From Eq. 1 we have

V_i(t) = Σ_j J̄_ij ∫_{t_0}^{t} g_i(t − s) ν_j(s) ds.   (3)

So far we have only considered the synaptic inputs to the neurons. We enrich our model by assuming that neuron i also receives an external current density composed of a deterministic part, noted I_i(t), and a stochastic part, noted n_i(t), so that

V_i(t) = Σ_j J̄_ij ∫_{t_0}^{t} g_i(t − s) ν_j(s) ds + ∫_{t_0}^{t} g_i(t − s) (I_i(s) + n_i(s)) ds.

We assume, and this is essential for deriving the mean-field equations below, that all indexed quantities depend only upon the P populations of neurons (see Definition 1), i.e.,

S_i = S_p(i),  g_i = g_p(i),  I_i(t) = I_p(i)(t),  n_i(t) ∼ n_p(i)(t),

where x ∼ y indicates that the two random variables x and y have the same probability distribution. In other words, all neurons in the same population are described by identical equations (in law). The g-shapes describe the shape of the postsynaptic potentials and can reasonably well be approximated by smooth functions.
In detail we assume that g_α, α = 1,…,P is the Green's function of a linear differential equation of order k, i.e., satisfies

D_α^k g_α(t) = δ(t),   (5)

where δ(t) is the Dirac delta function. The functions b_lα(t), l = 0,…,k, α = 1,…,P, are assumed to be continuous. We also assume for simplicity that b_kα(t) ≠ 0 for all t ∈ ℝ, α = 1,…,P. We note D_α^k the corresponding differential operator:

D_α^k = Σ_{l=0}^{k} b_lα(t) d^l/dt^l.

Applying D_α^k to both sides of Eq. 3, using Eq. 7 and the fact that ν_j(s) = S_j(V_j(s)), we obtain a kth-order differential equation for V_i:

D_α^k V_i(t) = Σ_{j=1}^{N} J̄_ij S_j(V_j(t)) + I_i(t) + n_i(t).   (8)

With a slight abuse of notation, we split the sum with respect to j into P sums:

Σ_{j=1}^{N} J̄_ij S_j(V_j(t)) = Σ_{β=1}^{P} Σ_{j=1}^{N_β} J̄_ij S_β(V_j(t)).

We classically turn the kth-order differential Eq. 8 into a k-dimensional system of coupled first-order differential equations (we divided both sides of the last equation by c_i, see Eq. 6):

dV_{l,i}(t)/dt = V_{l+1,i}(t),  l = 0,…,k − 2,
dV_{k−1,i}(t)/dt = −Σ_{l=0}^{k−1} b_lα(t) V_{l,i}(t) + Σ_{j=1}^{N} J̄_ij S_j(V_{0,j}(t)) + I_i(t) + n_i(t).   (9)

A well-known example of g-shapes, see Section "Example II: The model of Jansen and Rit" below or Gerstner and Kistler (2002), is

g_α(t) = K e^{−t/τ_α} Y(t),   (10)

where Y(t) is the Heaviside function. This is an exponentially decaying postsynaptic potential corresponding to k = 1, b_0α(t) = 1/τ_α, b_1α(t) = 1. A somewhat smoother function is

g_α(t) = K t e^{−t/τ_α} Y(t),

corresponding to k = 2 in Eq. 5.
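The Green's-function property can be checked numerically in the simplest case k = 1: filtering an input through the first-order ODE must coincide with convolving it with the exponential g-shape. The following sketch (our own check, with arbitrary parameter values) does exactly that:

```python
import numpy as np

# Minimal check of the k = 1 case: g(t) = exp(-t/tau) Y(t) is the Green's
# function of  D g = g' + g/tau, so filtering an input u(t) through the ODE
#   v' = -v/tau + u(t)
# must agree with the convolution v(t) = (g * u)(t).

dt, T, tau = 1e-4, 1.0, 0.02
t = np.arange(0.0, T, dt)
u = np.sin(40 * t) ** 2                       # arbitrary smooth input

# Convolution with the Green's function.
g = np.exp(-t / tau)
v_conv = np.convolve(u, g)[: t.size] * dt

# Forward-Euler integration of the ODE.
v_ode = np.zeros_like(t)
for n in range(t.size - 1):
    v_ode[n + 1] = v_ode[n] + dt * (-v_ode[n] / tau + u[n])

print(float(np.max(np.abs(v_conv - v_ode))))   # O(dt) discretisation error
```

The two traces agree up to the discretisation error, which illustrates why the synaptic filter can equivalently be written as a differential operator acting on V_i.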
The dynamics. We modify Eq. 9 by perturbing the first k − 1 equations with Brownian noise and assuming that n_i(t) is white noise. This has the effect that the quantities appearing in Eq. 9 are no longer the derivatives up to order k − 1 of V_i; this becomes true again only in the limit where the added Brownian noise is null. This may seem artificial at first glance but (1) it is a technical assumption that is necessary in the proofs of the well-posedness of the mean-field equations, see Assumption 1 below, and (2) it generates a rich class of external stochastic inputs, as shown below. With this in mind, Eq. 9 now reads

dV_{l,i}(t) = V_{l+1,i}(t) dt + f_{l,i}(t) dW_{l,i}(t),  l = 0,…,k − 2,
dV_{k−1,i}(t) = ( −Σ_{l=0}^{k−1} b_{l,α}(t) V_{l,i}(t) + Σ_{j=1}^{N} J̄_ij S_j(V_{0,j}(t)) + I_i(t) ) dt + f_{k−1,i}(t) dW_{k−1,i}(t),   (12)

where the W_{l,i}(t), l = 0,…,k − 1, i = 1,…,N, are kN independent standard Brownian processes. Because we want the neurons in the same class to be essentially identical we also assume that the functions f_{l,i}(t) that control the amount of noise on each derivative satisfy f_{l,i}(t) = f_{l,p(i)}(t), l = 0,…,k − 1, i = 1,…,N. Note that in the limit f_{l,α}(t) = 0 for l = 0,…,k − 1 and α = 1,…,P, the components V_{l,i}(t) of the vector V_i(t) are the derivatives of the membrane potential V_i for l = 0,…,k − 1, and Eq. 12 turns into Eq. 9. The system of differential Eq. 12 implies that the class of admissible external stochastic inputs n_i(t) to neuron i are Brownian noise integrated through the filter of the synapse, i.e., involving the lth primitives of the Brownian motion for l ≤ k.
We now introduce the k − 1 N-dimensional vectors V_l(t) = [V_{l,1},…,V_{l,N}]^T, l = 1,…,k − 1 of the lth-order derivative (in the limit f_{l,p(i)}(t) = 0) of V(t), and concatenate them with V(t) into the Nk-dimensional vector

Ṽ(t) = [V(t)^T, V_1(t)^T,…,V_{k−1}(t)^T]^T.   (13)

The N-neuron network is described by the Nk-dimensional vector Ṽ(t). By definition the lth N-dimensional component V_l of Ṽ is equal to V_l. In the limit f_{l,α}(t) = 0 we have V_l(t) = d^l V(t)/dt^l. We next write the equations governing the time variation of the k N-dimensional sub-vectors of Ṽ(t), i.e., the derivatives of order 0,…,k − 1 of V(t). These are vector versions of Eq. 12. We write

dV_l(t) = V_{l+1}(t) dt + F_l(t) dW_l(t),  l = 0,…,k − 2,   (14)

where F_l(t) is the N × N diagonal matrix in which each f_{l,α}(t), α = 1,…,P is repeated N_α times, and the W_l(t), l = 0,…,k − 2, are k − 1 N-dimensional independent standard Brownian processes. The equation governing the (k − 1)th differential of the membrane potential has a linear part determined by the differential operators D_α^k, α = 1,…,P and accounts for the external inputs (deterministic and stochastic) and the activity of the neighbors. We note L(t) the N × Nk matrix describing the relation between the neurons' membrane potentials and their derivatives up to order k − 1 and the (k − 1)th derivative of V. This matrix is defined as the concatenation of the k N × N diagonal matrices L_l(t) = −diag(b_{l,α}(t)), each b_{l,α}(t) being repeated N_α times. We have

dV_{k−1}(t) = ( L(t)Ṽ(t) + J̄ · S(V(t)) + I(t) ) dt + F_{k−1}(t) dW_{k−1}(t),   (15)

where W_{k−1}(t) is an N-dimensional standard Brownian process independent of W_l(t), l = 0,…,k − 2. The coordinates of the N-dimensional vector I(t) are the external deterministic input currents I_{p(i)}(t). We define the Nk × Nk matrix L̃(t), built from the blocks Id_N, 0_{N×N} and L(t), where Id_N is the N × N identity matrix and 0_{N×N} the N × N null matrix. We also define the two kN-dimensional vectors

Ĩ(t) = [0_N^T,…,0_N^T, I(t)^T]^T  and  S̃(Ṽ(t)) = [0_N^T,…,0_N^T, (J̄ · S(V(t)))^T]^T,

where 0_N is the N-dimensional null vector. Combining Eqs. 14 and 15, the full equation satisfied by Ṽ can be written:

dṼ(t) = ( L̃(t)Ṽ(t) + S̃(Ṽ(t)) + Ĩ(t) ) dt + F(t) dW_t,   (17)

where the kN × kN matrix F(t) is equal to diag(F_0,…,F_{k−1}) and W_t is a kN-dimensional standard Brownian process.

THE MEAN-FIELD EQUATIONS
One of the central goals of this paper is to analyze what happens when we let the total number N of neurons grow to infinity. Can we "summarize" the kN equations (Eq. 17) with a smaller number of equations that would account for the populations' activity? We show that the answer to this question is yes and that the populations' activity can indeed be represented by P stochastic differential equations of order k. Despite the fact that their solutions are Gaussian processes, these equations turn out to be quite complicated because these processes are non-Markovian. We assume that the proportions of neurons in each population are nontrivial, i.e., that N_α/N converges to a constant in (0, 1) as N → ∞. If it were not the case the corresponding population would not affect the global behavior of the system, would not contribute to the mean-field equation, and could be neglected.

General derivation of the mean-field equation
When investigating the structure of such mesoscopic neural assemblies as cortical columns, experimentalists are able to provide the average value J̄_αβ of the synaptic efficacy J_ij from a neuron of population β = p(j) to a neuron of population α = p(i). These values are obviously subject to some uncertainty, which can be modeled by Gaussian random variables. We also impose that the distribution of the J_ij's depends only on the population pair α = p(i), β = p(j), and on the total number of neurons N_β of population β:

J_ij ∼ N( J̄_αβ/N_β, σ_αβ²/N_β ).   (18)

We also make the additional assumption that the J_ij's are independent. This is a reasonable assumption as far as modeling cortical columns from experimental data is concerned. Indeed, it is already difficult for experimentalists to provide the average value of the synaptic strength J̄_αβ from population β to population α and to estimate the corresponding error bars (σ_αβ), but measuring synaptic efficacy correlations in a large assembly of neurons seems currently out of reach. Though it is known that synaptic weights are indeed correlated (e.g., via synaptic plasticity mechanisms), these correlations are built by the dynamics via a complex interwoven evolution of neurons and synapses, and postulating the form of synaptic weight correlations requires, on theoretical grounds, a detailed investigation of the whole history of the neuron and synapse dynamics.
Let us now discuss the scaling form of the probability distribution (Eq. 18) of the J_ij's, namely the division by N_β for the mean and variance of the Gaussian distribution. This scaling ensures that the "local interaction field" Σ_{j=1}^{N_β} J_ij S_j(V_j(t)), summarizing the effects of the neurons in population β on neuron i, has a mean and variance which do not depend on N_β and are only controlled by the phenomenological parameters J̄_αβ, σ_αβ. We are interested in the limit law when N → ∞ of the N-dimensional vector V defined in Eq. 3 under the joint law of the connectivities and the Brownian motions, which we call the mean-field limit. This law can be described by a set of P equations, the mean-field equations. As mentioned in the introduction these equations can be derived in several ways, either heuristically as in the work of Amari (Amari, 1972; Amari et al., 1977), Sompolinsky (Crisanti et al., 1990; Sompolinsky et al., 1998), and Cessac (Cessac, 1995; Samuelides and Cessac, 2007), or rigorously as in the work of Ben-Arous and Guionnet (Ben-Arous and Guionnet, 1995, 1997; Guionnet, 1997). We derive them here in a pedestrian way, prove that they are well-posed, and provide an algorithm for computing their solution.
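The scaling can be illustrated with a small Monte Carlo experiment (a sketch with parameter values of our own choosing): sampling the weights from the distribution of Eq. 18 and freezing the membrane potentials, the mean and variance of the local interaction field stay essentially constant as the population size grows:

```python
import numpy as np

# Sketch of the scaling argument: with J_ij ~ N(Jbar/N, sigma^2/N),
# the local interaction field  sum_j J_ij * S(V_j)  has a mean and a
# variance that do not depend on N. All names below are ours.

rng = np.random.default_rng(1)
Jbar, sigma = 2.0, 1.5
S = lambda v: 1.0 / (1.0 + np.exp(-v))         # sigmoid

for N in (100, 1000, 10000):
    V = rng.normal(0.0, 1.0, size=N)           # frozen membrane potentials
    trials = np.empty(2000)
    for k in range(trials.size):               # redraw the weights each time
        J = rng.normal(Jbar / N, sigma / np.sqrt(N), size=N)
        trials[k] = J @ S(V)
    print(N, trials.mean(), trials.var())      # mean and variance stay ~constant
```

The mean is close to J̄ · E[S(V)] and the variance close to σ² · E[S(V)²] for every N, which is exactly what the 1/N_β scaling is designed to achieve.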
The effective description of the network population by population is possible because the neurons in each population are interchangeable, i.e., have the same probability distribution under the joint law of the multidimensional Brownian motion and the connectivity weights. This is the case because of the relations (Eqs. 4 and 16) which imply the form of Eq. 17.

The main ideas behind the dynamic mean-field equations.
Before diving into the mathematical developments let us comment briefly on the basic ideas and conclusions of the mean-field approach. Following Eq. 8, the evolution of the membrane potential of some neuron i in population α is given by:

D_α^k V_i(t) = Σ_{β=1}^{P} Σ_{j=1}^{N_β} J_ij S_j(V_j(t)) + I_i(t) + n_i(t).

Using the assumption that S_i, I_i, n_i depend only on the neuron's population, this gives:

D_α^k V_i(t) = Σ_{β=1}^{P} η_αβ(V(t)) + I_α(t) + n_α(t),   (19)

where we have introduced the local interaction field η_αβ(V(t)) = Σ_{j=1}^{N_β} J_ij S_β(V_j(t)), summarizing the effects of neurons in population β on neuron i and whose probability distribution only depends on the pre- and postsynaptic populations α and β.
In the simplest situation where the J_ij's have no fluctuations (σ_αβ = 0) this field reads

η_αβ(V(t)) = J̄_αβ Φ_β(V(t)),

where Φ_β(V(t)) = (1/N_β) Σ_{j=1}^{N_β} S_β(V_j(t)) is the frequency rate of neurons in population β, averaged over this population. Introducing in the same way the average membrane potential in population β, V_β(t) = (1/N_β) Σ_{j=1}^{N_β} V_j(t), we obtain

D_α^k V_α(t) = Σ_{β=1}^{P} J̄_αβ Φ_β(V(t)) + I_α(t) + n_α(t).

This equation resembles very much Eq. 19 if one makes the following reasoning: "Since Φ_β(V(t)) is the frequency rate of neurons in population β, averaged over this population, and since, for one neuron, the frequency rate is ν_i(t) = S_i(V_i(t)), let us write Φ_β(V(t)) = S_β(V_β(t))." This leads to:

D_α^k V_α(t) = Σ_{β=1}^{P} J̄_αβ S_β(V_β(t)) + I_α(t) + n_α(t),   (22)

which has exactly the same form as Eq. 19 but at the level of a neuron population. Equations such as (22), which are obtained via the very strong assumption

Φ_β(V(t)) = S_β(V_β(t)),   (23)

are typically those obtained by Jansen and Rit (1995). Surprisingly, they are correct and can be rigorously derived, as discussed below, provided σ_αβ = 0. However, they cannot remain true as soon as the synaptic weights fluctuate. Indeed, the transition from Eq. 19 to Eq. 22 corresponds to a projection from an NP-dimensional space to a P-dimensional one, which holds because the NP × NP-dimensional synaptic weight matrix has in fact only P linearly independent rows. This does not hold anymore if the J_ij's are random, since the synaptic weight matrix then generically has full rank. Moreover, the effect of the nonlinear dynamics on the synaptic weight variations about their mean is not small even if the σ_αβ's are, and the real trajectories of Eq. 19 can depart strongly from the trajectories of Eq. 22. This is the main message of this paper.
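For concreteness, a minimal sketch of the naive mean-field Eq. 22 in the first-order (k = 1) case, with coupling and input values of our own choosing, is a set of P coupled ODEs that can be integrated by forward Euler:

```python
import numpy as np

# Sketch of the "naive" mean-field equations in the k = 1 case,
#   dV_a/dt = -V_a/tau_a + sum_b Jbar_ab S(V_b) + I_a,
# valid only when the synaptic weights do not fluctuate (sigma_ab = 0).
# All parameter values below are ours, chosen for illustration.

P = 2
tau = np.array([0.01, 0.02])
Jbar = np.array([[1.0, -2.0],
                 [3.0, -0.5]])                 # excitatory/inhibitory couplings
I = np.array([0.5, 0.2])                       # constant external drive
S = np.tanh

dt, T = 1e-4, 0.5
Vbar = np.zeros(P)
for _ in range(int(T / dt)):                   # forward Euler
    Vbar = Vbar + dt * (-Vbar / tau + Jbar @ S(Vbar) + I)
print(Vbar)                                    # approaches a fixed point here
```

With these parameters the system settles to a fixed point; the point of the paper is precisely that this simple deterministic picture breaks down once the weights are random.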
To finish this qualitative description, let us say in a few words what happens to the mean-field equations when σ_αβ ≠ 0. We show below that the local interaction field η_αβ(V(t)) becomes, in the limit N_β → ∞, a time-dependent Gaussian field U_αβ(t). One of the main results is that this field is non-Markovian, i.e., it integrates the whole history via the synaptic responses g, which are convolution products. Despite the fact that the evolution equation for the membrane potential averaged over a population takes a very simple form,

D_α^k V_α(t) = Σ_{β=1}^{P} U_αβ(t) + I_α(t) + n_α(t),

it hides a real difficulty, since U_αβ(t) depends on the whole past. Therefore, the introduction of synaptic weight variability leads to a drastic change in neural mass models, as we now develop.
The Mean-Field equations. We note C([t_0, T], ℝ^P) (respectively C((−∞, T], ℝ^P)) the set of continuous functions from the real interval [t_0, T] (respectively (−∞, T]) to ℝ^P. By assigning a probability to subsets of such functions, a continuous stochastic process X defines a positive measure of unit mass on C([t_0, T], ℝ^P) (respectively C((−∞, T], ℝ^P)). This set of positive measures of unit mass is noted M_1(C([t_0, T], ℝ^P)) (respectively M_1(C((−∞, T], ℝ^P))). We now define a process of particular importance for describing the limit process: the effective interaction process.

Definition 2 (Effective interaction process). Let X be a P-dimensional Gaussian stochastic process. The effective interaction process U^X associated with X is the P × P-dimensional Gaussian process whose entries U_αβ^X are independent, with mean E[U_αβ^X(t)] = J̄_αβ m_β^X(t), where m_β^X(t) = E[S_β(X_β(t))], and covariance Cov(U_αβ^X(t), U_αβ^X(s)) = σ_αβ² Δ_β^X(t, s), where Δ_β^X(t, s) = E[S_β(X_β(t)) S_β(X_β(s))].
In order to construct the solution of the mean-field equations (see Section "Existence and Uniqueness of Solutions in Finite Time") we will need more explicit expressions for m_β^X(t) and Δ_β^X(t, s), which we obtain in the next proposition.
Proposition 1. Let μ(t) = 𝔼[X_t] be the mean of the process X and C(t, s) = 𝔼[(X_t − μ(t))(X_s − μ(s))^T] be its covariance matrix. The vectors m^X(t) and Δ^X(t, s) that appear in the definition of the effective interaction process U^X are given by the following expressions:

m_β^X(t) = ∫_ℝ S_β( x √(C_ββ(t, t)) + μ_β(t) ) Dx,

and

Δ_β^X(t, s) = ∫_{ℝ²} S_β( x √(C_ββ(t, t)) + μ_β(t) ) S_β( x C_ββ(t, s)/√(C_ββ(t, t)) + y √(C_ββ(s, s) − C_ββ(t, s)²/C_ββ(t, t)) + μ_β(s) ) Dx Dy,

where Dx = (2π)^{−1/2} e^{−x²/2} dx is the probability density of a 0-mean, unit-variance Gaussian variable.
Proof. The results follow immediately, by a change of variables, from the fact that X_β(t) is a univariate Gaussian random variable of mean μ_β(t) and variance C_ββ(t, t), and that the pair (X_β(t), X_β(s)) is a bivariate Gaussian random variable with mean (μ_β(t), μ_β(s)) and covariance matrix

[ C_ββ(t, t)  C_ββ(t, s) ]
[ C_ββ(t, s)  C_ββ(s, s) ].

Choose P neurons i_1,…,i_P, one in each population (neuron i_α belongs to population α). We define the kP-dimensional vector Ṽ^(N)(t) by choosing, in each of the k N-dimensional components V_l(t), l = 0,…,k − 1 of the vector Ṽ(t) defined in Eq. 13, the coordinates of indexes i_1,…,i_P. Then it can be shown, using either a heuristic argument or large deviations techniques (see Appendix A), that the sequence of kP-dimensional processes (Ṽ^(N)(t))_{N ≥ 1} converges in law to the process Ṽ(t) solution of the following mean-field equation:

dṼ(t) = ( L̃(t)Ṽ(t) + Ũ_t^V + Ĩ(t) ) dt + F̃(t) dW_t,  with  Ũ_t^V = [0_P^T,…,0_P^T, (U_t^V · 1)^T]^T.   (28)

The P × P matrices B_l(t), l = 0,…,k − 1 are, with a slight abuse of notation, equal to diag(b_{l,1}(t),…,b_{l,P}(t)). (W_t)_{t ≥ t_0} is a kP-dimensional standard Brownian process. U^V has the law of the effective interaction process associated with the vector V (first P-dimensional component of Ṽ) and is statistically independent of the external noise (W_t)_{t ≥ t_0} and of the initial condition Ṽ(t_0) (when t_0 > −∞). We have used for the matrices F_l(t), l = 0,…,k − 1 the same abuse of notation as for the matrices B_l(t), i.e., F_l(t) = diag(f_{l,1}(t),…,f_{l,P}(t)). The process (U_t^V)_{t ≥ t_0} is a P × P-dimensional process and is applied, as a matrix, to the P-dimensional vector 1 with all coordinates equal to 1, resulting in the P-dimensional vector U_t^V · 1 whose mean and covariance function can be readily obtained from Definition 2:

E[(U_t^V · 1)_α] = Σ_{β=1}^{P} J̄_αβ m_β^V(t),   (29)
Cov((U_t^V · 1)_α, (U_s^V · 1)_α) = Σ_{β=1}^{P} σ_αβ² Δ_β^V(t, s).   (30)

Equations (28) are formally very similar to Eq. 17 but there are some very important differences. The first ones are of dimension kP whereas the second are of dimension kN, which grows arbitrarily large when N → ∞.
The interaction term of the second, J̄ · S(V(t)), is simply the synaptic weight matrix applied to the activities of the N neurons at time t. The interaction term of the first equation, U_t^V, though innocuous looking, is in fact quite complex (see Eqs. 29 and 30). In fact the stochastic process U_t^V, putative solution of Eq. 28, is in general non-Markovian. To proceed further we formally integrate the equation using the flow, or resolvent, of Eq. 28, noted Φ_L(t, t_0) (see Appendix B), and we obtain, since we assumed L continuous, an implicit representation of Ṽ(t):

Ṽ(t) = Φ_L(t, t_0)Ṽ(t_0) + ∫_{t_0}^{t} Φ_L(t, s) ( Ũ_s^V + Ĩ(s) ) ds + ∫_{t_0}^{t} Φ_L(t, s) F̃(s) dW_s.   (31)

We now introduce for future reference a simpler model which is quite frequently used in the description of neural networks and has been formally analyzed by Sompolinsky and colleagues (Crisanti et al., 1990; Sompolinsky et al., 1998) in the case of one population (P = 1).
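The change of variables in Proposition 1 is easy to check numerically. The sketch below (our own check, with arbitrary means and covariances) evaluates the one- and two-point Gaussian integrals by Gauss-Hermite quadrature and compares them with Monte Carlo estimates:

```python
import numpy as np

# Numerical check of Proposition 1's change of variables: for a Gaussian
# X the quantities m = E[S(X_t)] and Delta = E[S(X_t) S(X_s)] reduce to
# 1-D and 2-D Gaussian integrals. All values below are ours.

S = lambda v: 1.0 / (1.0 + np.exp(-v))
mu_t, mu_s = 0.3, -0.1
C = np.array([[1.0, 0.4],
              [0.4, 0.8]])                 # [[C(t,t), C(t,s)], [C(t,s), C(s,s)]]

# Gauss-Hermite nodes integrate against exp(-x^2); substitute x -> sqrt(2) x
# to integrate against the N(0, 1) density.
x, w = np.polynomial.hermite.hermgauss(40)
Dx = w / np.sqrt(np.pi)                    # weights of the N(0, 1) measure
z = np.sqrt(2.0) * x

m = np.sum(S(np.sqrt(C[0, 0]) * z + mu_t) * Dx)

# Two-point term: decompose (X_t, X_s) into independent standard Gaussians.
a = np.sqrt(C[0, 0])
b = C[0, 1] / a
c = np.sqrt(C[1, 1] - b ** 2)
Delta = sum(
    Dx[i] * np.sum(S(a * z[i] + mu_t) * S(b * z[i] + c * z + mu_s) * Dx)
    for i in range(z.size)
)

# Monte Carlo check against direct sampling of the bivariate Gaussian.
rng = np.random.default_rng(2)
Xt, Xs = rng.multivariate_normal([mu_t, mu_s], C, size=200000).T
print(m, float(np.mean(S(Xt))))            # the two columns should agree
print(Delta, float(np.mean(S(Xt) * S(Xs))))
```

The decomposition X_t = μ_t + a Z_1, X_s = μ_s + b Z_1 + c Z_2 reproduces the covariance matrix exactly, which is the substance of the proof of Proposition 1.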

Example I: The Simple Model
In the Simple Model, each neuron's membrane potential decreases exponentially to its rest value if it receives no input, with a time constant τ_α depending only on the population. In other words, we assume that the g-shape describing the shape of the PSPs is given by Eq. 10, with K = 1 for simplicity. The noise is modeled by an independent Brownian process per neuron whose standard deviation f_α is the same for all neurons belonging to a given population.
Hence the dynamics of a given neuron i from population α of the network reads:

dV_i(t) = ( −V_i(t)/τ_α + Σ_{j=1}^{N} J_ij S_j(V_j(t)) + I_α(t) ) dt + f_α dW_i(t).

This is a special case of Eq. 12 where k = 1, b_0α(t) = 1/τ_α, b_1α(t) = 1 for α = 1,…,P. The corresponding mean-field equation reads:

dV_α(t) = ( −V_α(t)/τ_α + Σ_{β=1}^{P} U_αβ(t) + I_α(t) ) dt + f_α dW_α(t),   (33)

where the processes (W_α(t))_{t ≥ t_0} are independent standard Brownian motions and U_αβ(t) is the effective interaction term, see Definition 2. This is a special case of Eq. 28 with L = diag(1/τ_1,…,1/τ_P) and F = diag(f_1,…,f_P). Taking the expected value of both sides of Eq. 33 and using Eq. 26 we obtain that the mean μ_α(t) of V_α(t) satisfies the differential equation

dμ_α(t)/dt = −μ_α(t)/τ_α + Σ_{β=1}^{P} J̄_αβ ∫_ℝ S( x √(C_ββ(t, t)) + μ_β(t) ) Dx + I_α(t),

which is precisely the "naive" mean-field equation (Eq. 22) obtained with the assumption (Eq. 23). We see that Eq. 22 is indeed correct, provided that C_ββ(t, t) = 0, ∀t ≥ t_0.
Equation 33 can be formally integrated, and we obtain an implicit integral representation of the process $V_\alpha(t)$, where $t_0$ is the initial time. It is an implicit equation on the probability distribution of V(t), a special case of Eq. 31. If $\sigma_{\alpha\beta} = 0$ and $s_\alpha = 0$, then $C_{\alpha\alpha}(t, t) = 0$, $\forall t \ge t_0$, is a solution of this equation. Thus the mean-field equations for the Simple Model reduce to the naive mean-field equations (Eq. 22) in this case. This conclusion extends as well to all models of synaptic responses ruled by Eq. 5.
However, the equation for $C_{\alpha\alpha}(t, t)$ shows that, in the general case, in order to solve the differential equation for $\mu_\alpha(t)$ we need to know the whole past of the process V. This exemplifies a previous statement on the non-Markovian nature of the solutions of the mean-field equations.

Example II: The model of Jansen and Rit
One of the motivations of this study is to characterize the global behavior of an assembly of neurons, in particular to get a better understanding of recordings of cortical signals like EEG or MEG. One of the classical models of neural masses is Jansen and Rit's model (Jansen and Rit, 1995), in short the JR model (see Figure 2).
The model features a population of pyramidal neurons that receives inhibitory inputs from local interneurons, excitatory feedback, and excitatory inputs from neighboring cortical units and subcortical structures such as the thalamus. The excitatory input is represented by an external firing rate that has a deterministic part $I_1(t)$, accounting for the specific activity of other cortical units, and a stochastic part $n_1(t)$, accounting for nonspecific background activity. We formally consider the excitatory feedback of the pyramidal neurons as a new neural population, making the number P of populations equal to 3. We also represent the external inputs to the other two populations by the sum of a deterministic part $I_j(t)$ and a stochastic part $n_j(t)$, j = 2, 3 (see Figure 2).
In the model introduced originally by Jansen and Rit, the connectivity weights were assumed to be constant, i.e., equal to their mean value. Nevertheless, there exists a variability of these coefficients and, as we show in the sequel, this variability impacts the solution at the level of the neural mass. Statistical properties of the connectivities have been studied in detail, for instance in Braitenberg and Schüz (1998). We consider a network of N neurons, with $N_\alpha$ neurons, α = 1, 2, 3, belonging to population α. We index by 1 (respectively 2 and 3) the pyramidal (respectively excitatory feedback, inhibitory interneuron) population. We choose in each population a particular neuron indexed by $i_\alpha$, α = 1, 2, 3. The evolution equations of the network can be written, for instance, in terms of the potentials $V_{i_1}$, $V_{i_2}$ and $V_{i_3}$ labeled in Figure 2. In the mean-field limit, denoting by $V_\alpha$, α = 1, 2, 3, the average membrane potential of each class, we obtain mean-field equations in which $U_{\alpha\beta}$, α, β = 1, 2, 3, is the effective interaction process associated with this problem, i.e., a Gaussian process whose mean is nonzero only for the interacting pairs of populations; all other mean values correspond to non-interacting populations and are equal to 0. The covariance matrix can be deduced from Eq. 25. This model is a voltage-based model in the sense of Ermentrout (1998). Let us now instantiate the synaptic dynamics and compare the mean-field equations with Jansen's population equations (sometimes improperly called mean-field equations as well).
The simplest model of synaptic integration is a first-order integration, which yields exponentially decaying postsynaptic potentials; note that this is exactly Eq. 10. The corresponding g-shape satisfies a first-order differential equation in which τ is the time constant of the synaptic integration and K the synaptic efficiency. The coefficients K and τ are the same for the pyramidal and the excitatory feedback populations (they are characteristic of the pyramidal neurons and define the g-shape $g_1$), and different for the inhibitory population (defining the g-shape $g_3$). In the pyramidal or excitatory (respectively inhibitory) case we have K = K₁, τ = τ₁ (respectively K = K₃, τ = τ₃). Finally, the sigmoid function S is given by $S(v) = \nu_{\max} / (1 + e^{r(v_0 - v)})$, where $\nu_{\max}$ is the maximum firing rate, $v_0$ a voltage reference and r the slope parameter. With this synaptic dynamics we obtain the first-order Jansen and Rit equations. The "original" Jansen and Rit equations (Grimbert and Faugeras, 2006; Jansen and Rit, 1995) amount to considering only the mean of the process V and assuming that the expectation commutes with the sigmoidal function S. This is a very strong assumption, and the fluctuations of the solutions of the mean-field equations around the mean imply that the sigmoid cannot be considered as linear in the general case.
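The first-order synaptic integration and the sigmoid can be sketched as follows. The parameter values (nu_max, v0, r, K, tau) are illustrative placeholders, and the first-order ODE y' = -y/tau + K x is one common convention for producing exponentially decaying PSPs, not necessarily the paper's exact normalization:

```python
import numpy as np

# Hedged sketch of first-order synaptic integration and the JR-type sigmoid.
# Parameter values are illustrative, not the paper's.
nu_max, v0, r = 5.0, 6.0, 0.56      # max firing rate, reference voltage, slope
K, tau = 3.25, 0.01                  # synaptic efficiency and time constant

def S(v):
    """Sigmoidal voltage-to-rate transform."""
    return nu_max / (1.0 + np.exp(r * (v0 - v)))

# First-order PSP: y' = -y/tau + K*x, integrated with forward Euler.
dt, n_steps = 1e-4, 5000
x = np.zeros(n_steps); x[0] = 1.0 / dt   # unit-area impulse input
y, trace = 0.0, []
for k in range(n_steps):
    y += dt * (-y / tau + K * x[k])
    trace.append(y)
# The impulse response decays exponentially with time constant tau.
```

Note that S(v0) = nu_max/2: the reference voltage is the half-activation point.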
A higher order model was introduced by van Rotterdam et al. (1982) to better account for synaptic integration and to better reproduce the characteristics of real postsynaptic potentials. In this model the g-shapes satisfy the second-order differential equation $\ddot{y}(t) + \frac{2}{\tau}\dot{y}(t) + \frac{1}{\tau^2} y(t) = \frac{K}{\tau} x(t)$, whose solution is the g-shape defined by Eq. 11. Here again, going from the mean-field Eq. 37 to the original Jansen and Rit neural mass model consists in studying the equation of the mean of the process given by Eq. 37 and commuting the sigmoidal function with the expectation. Note that the introduction of higher order synaptic integration results in richer behaviors. For instance, Grimbert and Faugeras (2006) showed that some bifurcations can appear in the second-order JR model, giving rise to epileptic-like oscillations and alpha activity, that do not appear in the first-order model.
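The second-order synaptic equation can be checked numerically against its analytic impulse response, the alpha-function $(K/\tau)\, t\, e^{-t/\tau}$; the values of K and τ below are illustrative:

```python
import numpy as np

# Hedged sketch: impulse response of the second-order (van Rotterdam) synaptic
# equation y'' + (2/tau) y' + (1/tau**2) y = (K/tau) x, illustrative K, tau.
# Its impulse response is the alpha-function g(t) = (K/tau) * t * exp(-t/tau).
K, tau, dt, n = 3.25, 0.01, 1e-5, 4000
t = np.arange(1, n + 1) * dt
g_closed = (K / tau) * t * np.exp(-t / tau)   # analytic impulse response

y, yd = 0.0, 0.0                              # semi-implicit Euler check
trace = np.empty(n)
for k in range(n):
    x = 1.0 / dt if k == 0 else 0.0           # unit-area impulse at t = 0
    yd += dt * ((K / tau) * x - (2.0 / tau) * yd - y / tau**2)
    y += dt * yd
    trace[k] = y                              # value at t = (k + 1) * dt
# trace approximates g_closed; both peak near t = tau with value K/e.
```

The double pole at $-1/\tau$ is what produces the finite rise time absent from the first-order model.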

EXISTENCE AND UNIQUENESS OF SOLUTIONS IN FINITE TIME
The mean-field equation (Eq. 31) is an implicit equation for the stochastic process $(V(t))_{t \ge t_0}$. We prove in this section that under some mild assumptions this implicit equation has a unique solution. This solution is the fixed point, in the set $\mathcal{M}_1^+(C([t_0, T], \mathbb{R}^{kP}))$ of kP-dimensional processes, of an equation that we will define from the mean-field equations. We will construct a sequence of Gaussian processes and prove that it converges in distribution toward this fixed point.
We first recall some results on the convergence of random variables and stochastic processes.

CONVERGENCE OF GAUSSIAN PROCESSES
We recall the following result from Bogachev (1998), which formalizes the intuition that a sequence of Gaussian processes converges toward a Gaussian process if and only if the means and covariance functions converge. In fact, for this to be true it is necessary to add one more condition, namely that the corresponding sequence of measures (elements of $\mathcal{M}_1^+$) does not have "any mass at infinity". This property is called uniform tightness (Billingsley, 1999). More precisely, we have Definition 3 (Uniform tightness): let $\{X^n\}_{n=1}^{\infty}$ be a sequence of kP-dimensional processes defined on $[t_0, T]$ and let $P_n$ be the associated probability measures. The sequence is called uniformly tight if and only if for all ε > 0 there exists a compact set K of $C([t_0, T], \mathbb{R}^{kP})$ such that $P_n(K) > 1 - \varepsilon$ for all n ≥ 1.
Theorem 1. Let $\{X^n\}_{n=1}^{\infty}$ be a sequence of kP-dimensional Gaussian processes defined on $[t_0, T]$ or on an unbounded interval of $\mathbb{R}$. The sequence converges to a Gaussian process X if and only if the following three conditions are satisfied: (1) the sequence $\{X^n\}_{n=1}^{\infty}$ is uniformly tight; (2) the sequence $\mu_n(t)$ of mean functions converges for the uniform norm; (3) the sequence $C_n$ of covariance operators converges for the uniform norm.
We now, as advertised, define such a sequence of Gaussian processes.
Let us fix $Z_0$, a kP-dimensional Gaussian random variable, independent of the Brownian motion and of the process $(X(t))_{t \in [t_0, T]}$.

Definition 4. Let X be an element of $\mathcal{M}_1^+(C([t_0, T], \mathbb{R}^{kP}))$. We define the process $F_k(X)$ by Eq. 38, where $U^X$ and I(s) are defined in Section "Mean-Field Equations for Multi-Populations Neural Network Models".
Note that, by Definition 2, the random process $(F_k(X)(t))_{t \in [t_0, T]}$, k ≥ 1, is the sum of a deterministic function (defined by the external current) and three independent random processes defined by $Z_0$, the interaction between neurons, and the external noise. These three processes being Gaussian, so is $F_k(X)$. Let X be a given stochastic process of $\mathcal{M}_1^+(C([t_0, T], \mathbb{R}^{kP}))$. In the remainder of this section we show that the sequence of processes $\{F_k^{(n)}(X)\}_{n=0}^{\infty}$ converges in distribution toward the unique fixed point Y of $F_k$, which is also the unique solution of the mean-field equation (Eq. 31).

EXISTENCE AND UNIQUENESS OF A SOLUTION FOR THE MEAN-FIELD EQUATIONS
The following upper and lower bounds are used in the sequel.
Lemma 1. Consider the Gaussian process $(U^X(t) \cdot \mathbf{1})_{t \in [t_0, T]}$, where $U^X$ is defined in Section "The Mean-Field Equations" and $\mathbf{1}$ is the P-dimensional vector with all coordinates equal to 1. Its mean is bounded by a constant μ for all $t_0 \le t \le T$, and the largest eigenvalue of its covariance matrix is upperbounded by a constant $\sigma_{\max}^2$.
The proof of existence and uniqueness of the solution, and of the convergence of the sequence (Eq. 38), is in two main steps. We first prove that the sequence of Gaussian processes $\{F_k^{(n)}(X)\}_{n=0}^{\infty}$, k ≥ 1, is uniformly tight by showing that it satisfies Kolmogorov's criterion for tightness. This takes care of condition 1 in Theorem 1. We then prove that the sequences of mean functions and covariance operators are Cauchy sequences for the uniform norms, taking care of conditions 2 and 3.

Uniform tightness
We first recall the following theorem due to Kolmogorov (Kushner, 1984, Chapter 4.1): if there exist positive constants α, β and C such that $\mathbb{E}[\|X^n(t) - X^n(s)\|^{\beta}] \le C\, |t - s|^{1+\alpha}$ for all n ≥ 1 and all s, t ∈ $[t_0, T]$, then the sequence $\{X^n\}_{n=1}^{\infty}$ is uniformly tight.
Using this theorem we prove that the sequence $\{F_k^{(n)}(X)\}_{n=0}^{\infty}$, k ≥ 1, satisfies Kolmogorov's criterion for β = 4 and α ≥ 1. The reason for choosing β = 4 is that, heuristically, $dW \simeq (dt)^{1/2}$. Therefore, in order to upperbound $\mathbb{E}[\|X^n(t) - X^n(s)\|^{\beta}]$ by a power of |t − s| at least equal to 2 (hence strictly larger than 1), we need to raise $X^n(t) - X^n(s)$ to a power at least equal to 4. The proof itself is technical and uses standard inequalities (Cauchy-Schwarz's and Jensen's), properties of Gaussian integrals, elementary properties of the stochastic integral, and Lemma 1. It also uses the fact that the input current is bounded.
Proof. We do the proof for k = 1; the case k > 1 is similar. If we assume that n ≥ 1 and s < t, we can rewrite the difference $X^n(t) - X^n(s)$ as a sum of seven terms, and therefore, by the Cauchy-Schwarz inequality, bound its squared norm by the sum of the squared norms of these terms. Because $\|\Phi_L(t, t_0) - \Phi_L(s, t_0)\| \le |t - s|\, \|L\|$, all terms in the righthand side of the inequality but the second one, which involves the Brownian motion, are of the order of $(t - s)^2$. We raise both sides to the second power again, use the Cauchy-Schwarz inequality, and take the expected value. Remember that $U^1$ is a P-dimensional diagonal Gaussian process, noted $Y^u$ in the sequel; its second-order moments are upperbounded by some regular function of μ and $\sigma_{\max}$ (defined in Lemma 1) and, because of the properties of Gaussian integrals, so are its fourth-order moments.
We now define $B(u) = \Phi_L(s, u) F(u)$ and evaluate, for all i, j, the fourth-order moments of the stochastic integrals (using properties of the Itô integral). The last term is the sum of only three types of terms: (1) terms with $j_1 = k_1 = j_2 = k_2$; (2) terms with $j_1 = k_1$ and $j_2 = k_2$ but $j_1 \ne j_2$, which factor because of the independence of $W^{j_1}_u$ and $W^{j_2}_u$; (3) terms with $j_1 = j_2$ and $k_1 = k_2$ but $j_1 \ne k_1$.

The mean and covariance sequences are Cauchy sequences
Let us note $\mu_n(t)$ [respectively $C_n(t, s)$] the mean [respectively the covariance matrix] function of $X^n = F_k(X^{n-1})$, n ≥ 1. Note that the kP × kP covariance matrix of $X^n$ has only one nonzero P × P block, which, according to Definition 2, is given by Eq. 43, where θ is given by Eq. 27 and Dx is defined in Proposition 1.
In order to prove our main result, that the two sequences of functions $(\mu_n)$ and $(C_n)$ are uniformly convergent, we require four lemmas that we state without proofs, the proofs being found in Appendixes E-H. The first lemma gives a uniform (i.e., independent of n ≥ 2 and α = 1,…,kP) strictly positive lowerbound for $C^n_{\alpha\alpha}(t, t)$. In what follows we use the following notation: for C a symmetric positive definite matrix, we note $\lambda_{\min}^C$ its smallest eigenvalue.
Lemma 2. The following upperbounds are valid for all n ≥ 1 and all s, t ∈ $[t_0, T]$,
where μ and $\sigma_{\max}$ are defined in Lemma 1, $\lambda_{\max}^{\Gamma}$ is defined in Assumption 1, and $\lambda_{\min}^{\Sigma_{Z_0}}$ is the smallest eigenvalue of the symmetric positive definite covariance matrix $\Sigma_{Z_0}$.
The second lemma also gives a uniform lowerbound for the expression $C^n_{\alpha\alpha}(s, s)\, C^n_{\alpha\alpha}(t, t) - C^n_{\alpha\alpha}(t, s)^2$, which appears in the definition of $C_{n+1}$ through Eqs. 43 and 27. The crucial point is that this function is O(|t − s|), which is central in the proof of Lemma 5. The third lemma shows that an integral appearing in the proof of the uniform convergence of the sequences of functions $(\mu_n)$ and $(C_n)$ is upperbounded by the nth term of a convergent series. With these lemmas in hand we prove Proposition 3. The proof is technical but its idea is very simple. We find upperbounds for the matrix infinite norm of $C_{n+1}(t, s) - C_n(t, s)$ and the infinite norm of $\mu_{n+1}(t) - \mu_n(t)$ by applying the mean value theorem and Lemmas 3 and 4 to these norms. These upperbounds involve integrals of the infinite norms of $C_n(t, s) - C_{n-1}(t, s)$ and $\mu_n(t) - \mu_{n-1}(t)$ and, through Lemma 4, of one over the square root of the function θ. Proceeding recursively and using Lemma 5, one easily shows that the infinite norms of $C_{n+1} - C_n$ and $\mu_{n+1} - \mu_n$ are upperbounded by the nth term of a convergent series, from which it follows that the two sequences of functions are Cauchy sequences, hence convergent.
According to Eq. 27 we are led to consider the difference $A_n - A_{n-1}$, defined from the covariance functions. The mean value theorem yields an upperbound involving the constants $k_L$ and $k_L^T$ defined in Appendix B, and a similar computation applies to the mean values, where μ is defined in Lemma 1. We then use the mean value theorem and Lemmas 3 and 4 to upperbound the resulting double integrals, whose integrands are weighted by factors equal to 1 or to $1/\sqrt{\theta(u, v)}$. Proceeding recursively until we reach $C_0$ and $\mu_0$, we obtain an upperbound for $\|C_{n+1}(t, s) - C_n(t, s)\|_{\infty}$ (respectively for $\|\mu_{n+1}(t) - \mu_n(t)\|_{\infty}$) which is the sum of fewer than $5^n$ terms, each one being the product of a constant k raised to a power at most n, times $2\mu_{\max}$ or $2\Sigma_{\max}$ (upperbounds for the norms of the mean vector and the covariance matrix defined in Lemma 2), times a 2n-dimensional integral $I_n = \int \cdots \int \prod_{i=1}^n \rho_i(u_i, v_i)\, du_i\, dv_i$, where the functions $\rho_i(u_i, v_i)$, i = 1,…,n, are either equal to 1 or to $1/\sqrt{\theta(u_i, v_i)}$. According to Lemma 5, this integral is of the order of some positive constant raised to the power n divided by (n − 1)!. Hence the sum is less than some positive constant k raised to the power n divided by (n − 1)!. Taking the supremum with respect to t and s in $[t_0, T]$ we obtain the same result for $\|C_{n+1} - C_n\|_{\infty}$ (respectively $\|\mu_{n+1} - \mu_n\|_{\infty}$). Since the series $\sum_{n \ge 1} k^n/n!$ is convergent, this implies that $\|C_{n+p} - C_n\|_{\infty}$ (respectively $\|\mu_{n+p} - \mu_n\|_{\infty}$) can be made arbitrarily small for large n and p, and the sequence $C_n$ (respectively $\mu_n$) is a Cauchy sequence.

Existence and uniqueness of a solution of the mean-field equations
It is now easy to prove our main result: the mean-field equations (Eq. 31), or equivalently (Eq. 28), are well-posed, i.e., have a unique solution.
Theorem 4. For any nondegenerate kP-dimensional Gaussian random variable $Z_0$, independent of the Brownian motion, and any initial process X such that $X(t_0) = Z_0$, the map $F_k$ has a unique fixed point in $\mathcal{M}_1^+(C([t_0, T], \mathbb{R}^{kP}))$.
Proof. Since $C([t_0, T], \mathbb{R}^{kP})$ (respectively $C([t_0, T]^2, \mathbb{R}^{kP \times kP})$) is a Banach space for the uniform norm, the Cauchy sequence $\mu_n$ (respectively $C_n$) of Proposition 3 converges to an element μ of $C([t_0, T], \mathbb{R}^{kP})$ (respectively an element C of $C([t_0, T]^2, \mathbb{R}^{kP \times kP})$). Therefore, according to Theorem 1, the sequence $\{F_k^{(n)}(X)\}_{n=0}^{\infty}$ of Gaussian processes converges in law toward the Gaussian process Y with mean function μ and covariance function C. This process is clearly a fixed point of $F_k$.
Hence we know that there exists at least one fixed point for the map $F_k$. Assume there exist two distinct fixed points $Y^1$ and $Y^2$ of $F_k$ with mean functions $\mu^i$ and covariance functions $C^i$, i = 1, 2, with the same initial condition. Since $F_k^{(n)}(Y^i) = Y^i$ for all n ≥ 1, the proof of Proposition 3 shows that $\|\mu^1_n - \mu^2_n\|_{\infty}$ (respectively $\|C^1_n - C^2_n\|_{\infty}$) is upperbounded by the product of a positive number $a_n$ (respectively $b_n$) with $\|\mu^1 - \mu^2\|_{\infty}$ (respectively $\|C^1 - C^2\|_{\infty}$). Since $\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = 0$ and $\mu^i_n = \mu^i$, $C^i_n = C^i$ for i = 1, 2, this shows that $\mu^1 = \mu^2$ and $C^1 = C^2$; hence the two Gaussian processes $Y^1$ and $Y^2$ are indistinguishable.

CONCLUSION
We have proved that for any nondegenerate Gaussian initial condition $Z_0$ there exists a unique solution of the mean-field equations. The proof of Theorem 4 is constructive and hence provides a way of computing the solution of the mean-field equations by iterating the map $F_k$ of Definition 4, starting from any initial process X satisfying $X(t_0) = Z_0$, for instance a Gaussian process such as an Ornstein-Uhlenbeck process. We build upon these facts in Section "Numerical Experiments".
Note that the existence and uniqueness result holds whatever the initial time $t_0$ and the final time T.

EXISTENCE AND UNIQUENESS OF STATIONARY SOLUTIONS
So far we have investigated the existence and uniqueness of solutions of the mean-field equation for a given initial condition. We are now interested in stationary solutions, which allow for some simplifications of the formalism.
A stationary solution is a solution whose probability distribution does not change under the flow of the equation. These solutions have already been investigated by several authors (see Brunel and Hakim, 1999; Sompolinsky et al., 1998). We propose a new framework to study and simulate these processes. Indeed, we show in this section that under a certain contraction condition there exists a unique solution to the stationary problem. As in the previous section, our proof is constructive and provides a way to simulate the solutions.
Remark. The long-time mean-field description of a network is still a great endeavor in mathematics and statistical physics. In this section we formally take the mean-field equation we obtained and let $t_0 \to -\infty$; this way we obtain an equation which is the limit of the mean-field equation when $t_0 \to -\infty$. It means that we consider first the limit $N \to \infty$ and then $t_0 \to -\infty$. These two limits do not necessarily commute, and there are known examples, for instance in spin glasses, where they do not.
It is clear that in order to get stationary solutions the stochastic system has to be autonomous. More precisely, we modify Assumption 1 as follows.
Assumption 2. (a) The matrixes L(t) and F(t) and the input currents I(t) do not depend upon t. (b) The real parts of the eigenvalues of L are negative: $\mathrm{Re}(\lambda) < 0$ for all eigenvalues λ of L. (c) The matrix F has full rank.
Under Assumption 2(a) the resolvent $\Phi_L(t, s)$ is equal to $e^{L(t-s)}$. Under Assumption 2(b) we only consider first-order systems (k = 1), since otherwise the matrix L has eigenvalues equal to 0. We now prove Proposition 4.
Proof. The first property follows from the fact that $\mathrm{Re}(\lambda) < -\lambda_L$ for all eigenvalues λ of L; this also implies that there exists a norm on $\mathbb{R}^P$ in which $\|e^{Lt}\|$ decays like $e^{-\lambda_L t}$. The stochastic integral $\int_{t_0}^t e^{L(t-s)} F\, dW(s)$ is then well-defined for all $t \le T$ and is Gaussian with zero mean. Its covariance matrix is an integral involving $e^{L(t-u)} F F^T e^{L^T(t'-u)}$. Assuming for instance that t' < t and performing the natural change of variable, this matrix integral is, under the previous assumptions, well-defined when $t_0 \to -\infty$ (dominated convergence theorem), and it is a function of t' − t only.
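In the scalar case L = −1/τ, F = f, the limit covariance integral has the closed form $f^2 \tau / 2$, which gives a quick numerical sanity check of this construction (the values of τ and f below are illustrative):

```python
import numpy as np

# Hedged numerical check of the stationary covariance in the scalar case
# L = -1/tau, F = f (illustrative values): the limit covariance
# int_0^infty e^{L u} F F^T e^{L^T u} du reduces to f**2 * tau / 2.
tau, f = 0.25, 0.3
L, F = -1.0 / tau, f
u = np.linspace(0.0, 50.0 * tau, 200001)      # effectively [0, infinity)
integrand = np.exp(L * u) * F * F * np.exp(L * u)
du = u[1] - u[0]
sigma_numeric = ((integrand[:-1] + integrand[1:]) / 2.0).sum() * du  # trapezoid
sigma_closed = f**2 * tau / 2.0
# sigma_numeric agrees with sigma_closed to high accuracy.
```

This is the familiar stationary variance of an Ornstein-Uhlenbeck process.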
The second point of Proposition 4 guarantees the existence of the process $X^0$ as the limit of the processes $Y^{t_0}$ when $t_0 \to -\infty$. This process is a stationary solution of the equation: it is Gaussian, of mean $\mathbb{E}[X^0(t)] = 0$ and of covariance matrix $\Sigma^0$ equal to the limit covariance defined by Eq. 50, which is independent of t. We call long term mean-field equation (LTMFE) the implicit equation in which $X^0$ is the stationary process defined by Eq. 51 and $U^V(t)$ is the effective interaction process introduced previously. We next define the long term function $F_{\mathrm{stat}}$ and prove that it is well-defined.
Proof. We have already seen that the process $X^0$ is well-defined. The remaining term is clearly well-defined; we show that the memory term is also well-defined as a Gaussian random variable.
We write this term as the product of $e^{Lt}$ and a second factor, which is a Gaussian random variable. The integral defining its mean is well-defined because of Eq. 49 and the fact that the functions $S_\beta$ are bounded. A similar reasoning shows that the corresponding covariance matrix is well-defined. We can now prove the following proposition.
Proposition 6. The mean vectors and the covariance matrices of the processes in the image of $F_{\mathrm{stat}}$ are bounded.
Proof. Indeed, since $\mathbb{E}[X^0(t)] = 0$, the mean vectors are bounded. In a similar fashion the covariance matrices of the processes in the image of $F_{\mathrm{stat}}$ are bounded.
Lemma 6. The set of stationary processes is invariant under $F_{\mathrm{stat}}$.
Proof. Since the processes in the image of $F_{\mathrm{stat}}$ are Gaussian processes, one just needs to check that the mean of the process is constant in time and that its covariance matrix C(s, t) only depends on t − s.
Let Z be a stationary process and $Y = F_{\mathrm{stat}}(Z)$. We denote by $\mu^Z_\alpha$ the mean of the process $Z_\alpha(t)$, which does not depend on time, and by $\mu^Z$ the mean vector of the stationary process $U^Z \cdot \mathbf{1}$. One checks that the mean of Y is not a function of s; similarly, we compute the covariance function of Y and check that it can be written as a function of t − s.
We have proved the uniform tightness of the sequence of processes $\{F_k^{(n)}(X)\}_{n=0}^{\infty}$ in Theorem 3. Hence, according to Kolmogorov's criterion for tightness, we just have to prove that the corresponding sequence of Gaussian random variables is uniformly tight. Since it is a sequence of Gaussian random variables, it is sufficient to prove that their means are bounded and their covariance matrices upperbounded to obtain that for any ε > 0 there exists a compact $K_\varepsilon$ whose measure is greater than 1 − ε for all $n \in \mathbb{N}$. This is a consequence of Proposition 6 for the first random variable and of the definition of $X^0$ for the second. By Kolmogorov's criterion the sequence of processes $\{F_{\mathrm{stat}}^{(n)}(X)\}_{n=0}^{\infty}$ is uniformly tight.
In order to apply Theorem 1 we need to prove that the sequences of covariance and mean functions are convergent. Unlike the case of finite $t_0$, this is not always true. Indeed, to ensure existence and uniqueness of solutions in the stationary case, the parameters of the system have to satisfy a contraction condition, and Proposition 3 extends as follows.
Proposition 7. If $\lambda_L$ defined in Eq. 48 satisfies the conditions (Eq. 53) defined in the proof, depending upon $k_C$ (defined in Eq. 45), $k_0$, $\mu_{LT}$ and $\Sigma_{LT}$ (defined in Proposition 6), then the sequences of covariance matrix functions $C_n(t, s)$ and of mean functions $\mu_n(t)$, s, t ∈ $[t_0, T]$, are Cauchy sequences for the uniform norms.
Proof. The proof follows that of Proposition 3 with a few modifications that we indicate. In establishing the equation corresponding to Eq. 44 we use the fact that $\|\Phi_L(t, u)\|_{\infty} \le k e^{-\lambda_L (t-u)}$ for some positive constant k and all $u \le t$. The rest of the proof proceeds the same way as in Proposition 3, Eqs. 46 and 47 being modified accordingly. Proceeding recursively until we reach $C_0$ and $\mu_0$, we obtain an upperbound for $\|C_{n+1}(t, s) - C_n(t, s)\|_{\infty}$ (respectively for $\|\mu_{n+1}(t) - \mu_n(t)\|_{\infty}$) which is the sum of fewer than $5^n$ terms, each being the product of $K^n$, times $2\mu_{LT}$ or $2\Sigma_{LT}$, times a 2n-dimensional integral $I_n$. Putting all these results together we obtain the following theorem of existence and uniqueness of solutions for the LTMFE.
Proof. The proof is essentially similar to that of Theorem 4. Indeed, the means and the covariance matrixes converge since they are Cauchy sequences in the complete space of continuous functions equipped with the uniform norm. Using Theorem 1, we obtain that the sequence converges to a process Y which is necessarily a fixed point of $F_{\mathrm{stat}}$. Hence we have existence of a fixed point for $F_{\mathrm{stat}}$. Uniqueness comes from the results obtained in the proof of Proposition 7. The limiting process is necessarily stationary: indeed, let X be a stationary process; then for any $n \in \mathbb{N}$ the process $F_{\mathrm{stat}}^{(n)}(X)$ is stationary by virtue of Lemma 6, and hence so is the limiting process, which is the unique fixed point of $F_{\mathrm{stat}}$.
Hence in the stationary case the existence and uniqueness of a solution is not always ensured. For instance, if the leaks are too small (i.e., when the time constants of the decay of the membrane potentials are too long), then the sequence can diverge or have multiple fixed points.

SIMULATION ALGORITHM
Beyond the mathematical results, the framework that we introduced in the previous sections gives us a strategy to compute numerically the solutions of the dynamic mean-field equations. Indeed, we proved in Section "Existence and Uniqueness of Solutions in Finite Time" that, under very moderate assumptions on the covariance matrix of the noise, the iterations of the map $F_k$ starting from any initial condition converge to the solution of the mean-field equations.
This convergence result gives us a direct way to compute numerically the solution of the mean-field equations. Since we are dealing with Gaussian processes, determining the law of the iterates of the map $F_k$ amounts to computing their mean and covariance functions. In this section we describe our numerical algorithm in the case of the Simple Model of Section "Example I: The Simple Model".
Computing $F_k$
Let X be a P-dimensional Gaussian process with mean $\mu^X = (\mu^X_\alpha(t))_{\alpha = 1, \ldots, P}$ and covariance $C^X$. We fix a time interval $[t_0 = 0, T]$ and denote by Y the image of the process X under $F_1$. In the case of the Simple Model the covariance of Y is diagonal, and the expressions obtained in Section "Existence and Uniqueness of Solutions in Finite Time" simplify accordingly. The mean $\mu^Y_\alpha$ can be computed using a standard discretization scheme of the integral, with a time step small compared with $\tau_\alpha$ and with the characteristic time of variation of the input current $I_\alpha$. Alternatively, we can use the fact that $\mu^Y_\alpha$ satisfies a differential equation and compute the solution using a Runge-Kutta algorithm (which is faster and more accurate). Note that when all the standard deviations of the process X are zero for all t ∈ [0, T] we obtain a standard dynamical system. In the general case we must also compute the covariance, through the quantities $H^X_{\alpha\beta}(t, s)$. The algorithm enables us to compute $H^X_{\alpha\beta}(t, s)$ for t > s; we deduce the values for t < s using the symmetry of this function, and we obtain the values for t = s from the same symmetry property. These numerical schemes provide an efficient way of computing the mean and covariance functions of the Gaussian process $F_1(X)$ (hence its probability distribution) knowing the law of the Gaussian process X. The algorithm used to compute the solution of the mean-field equations for the general models GM1 and GMk is a straightforward generalization.
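A minimal sketch of one ingredient of this algorithm, for a single population of the Simple Model: the mean is advanced by a forward scheme in which the Gaussian expectation of the sigmoid is evaluated by Gauss-Hermite quadrature from the current iterate's mean and variance. All parameter values are illustrative, and for simplicity the variance is kept frozen instead of being re-estimated at each pass, as the full algorithm does:

```python
import numpy as np

# Hedged sketch of the fixed-point iteration for one population of the
# Simple Model (illustrative parameters). Each pass recomputes the mean from
# mu'(t) = -mu/tau + Jbar * E[S(X(t))] + I, the Gaussian expectation E[S]
# being evaluated by Gauss-Hermite quadrature on the previous iterate.
tau, Jbar, I, g = 0.25, 0.5, 0.2, 1.0
S = lambda x: np.tanh(g * x)
dt, T = 1e-3, 2.0
t = np.arange(0.0, T, dt)
nodes, weights = np.polynomial.hermite_e.hermegauss(21)  # probabilists' Hermite

def gauss_expect(m, var):
    """E[S(X)] for X ~ N(m, var), via Gauss-Hermite quadrature."""
    return (weights @ S(m + np.sqrt(max(var, 0.0)) * nodes)) / np.sqrt(2 * np.pi)

mu = np.zeros_like(t)          # initial process: zero mean
var = np.full_like(t, 0.1)     # variance frozen for this sketch
for _ in range(20):            # iterate the map on the mean only
    new_mu = np.empty_like(mu)
    new_mu[0] = 0.0
    for k in range(1, len(t)):
        drift = -new_mu[k-1] / tau + Jbar * gauss_expect(mu[k-1], var[k-1]) + I
        new_mu[k] = new_mu[k-1] + dt * drift
    mu = new_mu
```

The full algorithm also updates the covariance function at each pass; the quadrature step is what distinguishes the iteration from the naive mean-field equation, where S would be applied to the mean directly.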

Analysis of the algorithm
Convergence rate. As proved in Theorem 4, given $Z_0$ a nondegenerate kP-dimensional Gaussian random variable and X a Gaussian process such that $X(0) = Z_0$, the sequences of mean values and covariance functions computed theoretically converge uniformly toward those of the unique fixed point of the map $F_k$. It is also clear that our algorithm converges uniformly toward the functions it emulates.
Hence for a finite N the algorithm converges uniformly toward the mean and covariance matrix of the process $F_k^{(N)}(X)$. Denote by $X^f$ the fixed point of $F_k$ in $\mathcal{M}_1^+(C([0, T], \mathbb{R}^{kP}))$, with mean $\mu^{X^f}(t)$ and covariance matrix $C^{X^f}(t, s)$, and by $\tilde{F}_k^{(N)}(X)$ the numerical approximation of $F_k^{(N)}(X)$ computed using the algorithm previously described. The uniform error between the simulated mean after N iterations with time step dt and the fixed point's mean (respectively covariance) is the sum of the numerical error of the algorithm and of the distance between the simulated process and the fixed point; it is controlled by Eq. 55, where $k_{\max} = \max(k, \bar{k})$, k and $\bar{k}$ being the constants that appear in the proof of Proposition 3 for the mean and covariance functions, and $R_N(x)$ is the exponential remainder, i.e., $R_N(x) = e^x - \sum_{n < N} x^n/n!$. The discretization algorithm used converges in O(dt). Let us denote by $C_1$ the convergence constant, which depends on the sharpness of the function we approximate and can be uniformly controlled over the iterations. Iterating the numerical algorithm has the effect of propagating the errors. Using these simple remarks we can bound the first term of the righthand side of Eq. 55, i.e., the approximation error at the Nth iteration, by $C_1 (N + T)\, dt$. Because the sequence of mean values is a Cauchy sequence, we can also bound the second term of the righthand side of Eq. 55 by $R_N(k)$, for the positive constant k introduced in the proof of Proposition 3. The remainder sequence $(R_N(k))_{N \ge 0}$ converges fast toward 0 (an estimate of its convergence rate can be obtained from Stirling's formula, since $(k^N/N!)^{1/N} \to 0$).
Hence we obtain Eq. 56. For the covariance, the principle of the approximation is exactly the same: the second term of the righthand side can be controlled by $R_N(\bar{k})$, where $\bar{k}$ is the constant introduced in the proof of Proposition 3, and the first term is controlled by the rate of convergence of the approximation of the double integral, which is bounded by $C_2 (N + T)\, dt$, where $C_2$ depends on the parameters of the system and on the discretization algorithm used. Hence we obtain Eq. 57. The expressions (Eqs. 56 and 57) are sums of two terms, one increasing with N and T and decreasing with dt, the other decreasing in N. If we want to obtain an estimation with an error bounded by some ε > 0, we can for instance fix N such that $\max(R_N(k), R_N(\bar{k})) < \varepsilon/2$ and then fix the time step dt small enough that the discretization terms are also smaller than ε/2.
Complexity. The complexity of the algorithm depends on the complexity of the computations of the integrals. The algorithm described has complexity $O(N (T/dt)^2)$.
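The ε-budget rule above can be sketched as a small helper; the constants k, kbar, C1, C2 below are placeholders for the (unspecified) constants of Proposition 3 and of the discretization scheme:

```python
import math

# Hedged sketch of the error-budget rule: pick the iteration count N from the
# exponential remainder R_N, then the time step dt from the discretization
# constants. k, kbar, C1, C2 are illustrative placeholders.
def remainder(x, N):
    """R_N(x) = e^x - sum_{n<N} x^n/n!, the tail of the exponential series."""
    return math.exp(x) - sum(x**n / math.factorial(n) for n in range(N))

def choose_parameters(eps, k=2.0, kbar=2.5, C1=1.0, C2=1.0, T=1.0):
    N = 1
    while max(remainder(k, N), remainder(kbar, N)) >= eps / 2.0:
        N += 1                                     # fixed-point error < eps/2
    dt = min(eps / (2.0 * C1 * (N + T)),
             eps / (2.0 * C2 * (N + T)))           # discretization error < eps/2
    return N, dt
```

Because the remainder decays factorially, N grows only slowly as ε shrinks; the cost is dominated by the $O(N (T/dt)^2)$ covariance integrals.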

THE IMPORTANCE OF THE COVARIANCE: SIMPLE MODEL, ONE POPULATION
As a first example and a benchmark for our numerical scheme we revisit the work of Sompolinsky et al. (1998). These authors studied the case of the Simple Model with one population (P = 1), with the centered sigmoidal function S(x) = tanh(gx), centered connectivity weights ($\bar{J} = 0$) of standard deviation σ = 1, and no input (I = 0, Λ = 0). Note therefore that there is no "noise" in the system, which hence does not match the nondegeneracy conditions of Proposition 2 and of Theorem 4. This issue is discussed below. In this case the mean is 0 for all t. Nevertheless, the Gaussian process is nontrivial, as revealed by the study of the covariance C(t, s).

Stationary solutions
Assuming that the solution of the mean-field equation is a stationary solution with $C(t, s) \equiv C(t - s) = C(\tau)$, Sompolinsky and his collaborators found that the covariance obeys a second-order differential equation of the form $\ddot{C}(\tau) = -\frac{dV_q}{dC}(C(\tau))$. This corresponds to the motion of a particle in a potential well, and it is easy to draw the phase portrait of the corresponding dynamical system. However, there is a difficulty: the potential $V_q$ depends on a parameter q which is in fact precisely the covariance at τ = 0 (q = C(0)). In the stationary case this covariance depends on the whole solution, and hence cannot really be considered as a parameter of the system. This is one of the main difficulties of this approach: mean-field equations in the stationary regime are self-consistent.
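A numerical illustration of this particle-in-a-well picture: we integrate $\ddot{C} = -dV_q/dC$ for a quartic truncation $V_q(C) = \lambda C^2/2 + \gamma C^4/4$ (the normalization and the coefficient values are ours, chosen for illustration), with λ fixed so that the orbit starting at q = C(0) is the homoclinic one, which for this truncation is a sech profile:

```python
import numpy as np

# Hedged sketch: covariance dynamics as motion in a potential well.
# Quartic truncation V_q(C) = lam*C**2/2 + gamma*C**4/4 with illustrative
# coefficients lam = -1, gamma = 2, so that the homoclinic orbit starting at
# q = C(0) = 1 is C(tau) = q / cosh(tau).
lam, gamma, q = -1.0, 2.0, 1.0
dt, T = 1e-4, 1.0
C, Cdot = q, 0.0
for _ in range(int(T / dt)):                 # semi-implicit Euler
    Cdot += dt * (-lam * C - gamma * C**3)   # C'' = -dV_q/dC
    C += dt * Cdot
analytic = q / np.cosh(T)
# C should be close to the analytic sech profile at tau = T.
```

The sech decay to 0 is exactly the homoclinic connection discussed below; with λ > 0 the same integrator would simply relax to C = 0.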
Nevertheless, the study of the shape of V_q, considering q as a free parameter, gives us some information. Indeed, V_q is even (because S is odd), and the quadratic and quartic coefficients of its Taylor expansion are governed by λ = 1 − g²J²⟨S′⟩²_q and γ = (1/6) g²J²⟨S‴⟩²_q, where ⟨φ⟩_q denotes the average value of φ under the Gaussian distribution with mean 0 and variance q = C(0).
If λ > 0, i.e., when g²J²⟨S′⟩²_q < 1, then the dynamical system (Eq. 58) has a unique solution C(t) = 0, ∀t ≥ 0. This corresponds to a stable fixed point (i.e., a deterministic trajectory μ = 0 with no fluctuations) for the neural network dynamics. On the other hand, if g²J²⟨S′⟩²_q ≥ 1, there is a homoclinic trajectory of Eq. 58 connecting the point q = C* > 0 where V_q vanishes to the point C = 0. This solution is interpreted by the authors as a chaotic solution of the neural network. A stability analysis shows that this is the only stable stationary solution (Sompolinsky et al., 1998).
The equation for the homoclinic solution is easily found using energy conservation and the facts that V_q(q) = 0 and dV_q/dC(q) = 0: along the homoclinic orbit the energy Ċ²/2 + V_q(C) vanishes, so one finds Ċ² = −2V_q(C). At fourth order in the Taylor expansion of V_q this gives an explicit expression for C(τ). Though λ depends on q, it can be used as a free parameter for interpolating the curve of C(τ) obtained from numerical data.
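The fourth-order solution can be checked numerically. Writing the quartic truncation as V₄(C) = (λ/2)C² + (γ/4)C⁴ (the sign and factor conventions here are chosen for this sketch, with λ < 0 in the regime admitting a nontrivial orbit, and the values of λ and γ are illustrative placeholders), the energy relation Ċ² = −2V₄(C) is solved by a sech profile:

```python
import numpy as np

lam, gam = -1.0, 1.5            # illustrative values; lam < 0, gam > 0
q = np.sqrt(-2.0 * lam / gam)   # amplitude where the truncated potential vanishes
k = np.sqrt(-lam)               # decay rate of the profile

tau = np.linspace(-5.0, 5.0, 2001)
C = q / np.cosh(k * tau)        # candidate solution C(tau) = q sech(k tau)

# energy conservation: (1/2) C'^2 + V4(C) should vanish identically,
# with V4(C) = (lam/2) C^2 + (gam/4) C^4 the fourth-order truncation
dC = np.gradient(C, tau)
V4 = 0.5 * lam * C ** 2 + 0.25 * gam * C ** 4
energy = 0.5 * dC ** 2 + V4
assert np.max(np.abs(energy)) < 1e-3
```

The decay rate k of the sech profile is set by λ alone, which is why λ can serve as the single free parameter when fitting C(τ) to numerical data.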

Numerical experiments
This case is a good benchmark for our numerical procedure since we know analytically the solutions we are looking for. We expect to find two regimes. In one case the correlation function is identically 0 in the stationary regime, for sufficiently small g or sufficiently small q (trivial case). The other case corresponds to a regime where C(τ) > 0 and C(τ) → 0 as τ → +∞ ("chaotic" case). This regime requires that both g and q be sufficiently large. We took τ_α = 0.25, σ_αα = 1. For these values, the change in dynamics predicted by Sompolinsky and collaborators occurs at g_c = 4.
In Sections "Existence and Uniqueness of Solutions in Finite Time" and "Existence and Uniqueness of Stationary Solutions" we introduced the assumption of nondegeneracy of the noise in order to ensure that the mean-field process was nondegenerate. In the present example, however, there is no external noise in the evolution, so we can observe the effects of relaxing this hypothesis in a situation where the results of Proposition 2 and of Theorem 4 cannot be applied. First, we observed numerically that, without external noise, the process can become degenerate (namely, some eigenvalues of the covariance matrix C_α(t, s) become very small and can even vanish). This also affects the convergence of the method, which exhibits numerical instabilities, though the iterations lead to a curve that is well fitted by the theoretical results of Sompolinsky et al. (see Figure 3). The instability essentially disappears if one adds a small noise. Note, however, that in this case the solution no longer matches Sompolinsky et al.'s theoretical calculation (see Figure 3).
Modulo this remark, we first considered the trivial case corresponding to small g values. We took g = 0.5 and T = 5. We chose as initial process the stationary Ornstein-Uhlenbeck process corresponding to the uncoupled system with Λ = 0.1. We drew μ_α(0) randomly from the uniform distribution on [−1, 1] and v_α(0) randomly from the uniform distribution on [0, 1]. Starting from this initial stationary process, we iterated the function F_1. During the iterations we set s_α = 0 in order to match the conditions imposed by Sompolinsky and colleagues. We observe that the method converges toward the expected solution: the mean function converges to 0, while the variance v(t) decreases exponentially fast in time toward a constant value corresponding to the stationary regime. This asymptotic value decreases between two consecutive iterations, which is consistent with the theoretical expectation that v(t) = 0 in the stationary regime of the trivial case. Finally, we observe that the covariance C(t − s, s) stabilizes to a curve that does not depend on s, and that its stationary value (large t − s) converges to 0.
We applied the same procedure for g = 5, corresponding to the "chaotic" regime. The behavior was the same for μ(t) but quite different for the covariance function C(t, s). Indeed, while in the first case the stationary value of v(t) tends to 0 with the number of iterations, in the chaotic case it stabilizes at a finite value. As before, the covariance C(t − s, s) stabilizes to a curve that does not depend on s. The shape of this curve can be extrapolated thanks to Sompolinsky et al.'s results. We observe a very good agreement with the theoretical predictions; with a fit f(x) in which ρ, K, λ are explicit functions of a, b, c, we obtain a slightly better approximation.
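The iteration of F_1 used in these experiments can be sketched in simplified form. This is an illustration, not the authors' implementation: it uses a plain rectangle rule for the double integral defining the covariance, Gauss-Hermite quadrature for the Gaussian expectations, a coarse grid, and an illustrative initial variance v(0) = 1. Since the weights are centered (J̄ = 0) and s_α = 0, the mean stays identically 0 and only the covariance is iterated.

```python
import numpy as np

# One-population "simple model": S(x) = tanh(g x), centered weights of
# standard deviation sigma, no input.
g, sigma, tau = 5.0, 1.0, 0.25      # g = 5 > g_c = 4: "chaotic" regime
T, n = 5.0, 51
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]

# probabilists' Gauss-Hermite rule for expectations under N(0, 1)
nodes, weights = np.polynomial.hermite_e.hermegauss(21)
weights = weights / np.sqrt(2.0 * np.pi)

def interaction_cov(C):
    """sigma^2 E[S(X_u) S(X_w)] for the centered Gaussian process with
    covariance C, computed by conditioning X_w on X_u."""
    D = np.empty_like(C)
    for i in range(n):
        for j in range(n):
            vi, vj, c = C[i, i], C[j, j], C[i, j]
            if vi < 1e-12 or vj < 1e-12:
                D[i, j] = 0.0
                continue
            rho = np.clip(c / np.sqrt(vi * vj), -1.0, 1.0)
            y = rho * np.sqrt(vj) * nodes[:, None] \
                + np.sqrt(vj * (1.0 - rho ** 2)) * nodes[None, :]
            inner = np.tanh(g * y) @ weights     # E[S(X_w) | X_u]
            D[i, j] = sigma ** 2 * weights @ (np.tanh(g * np.sqrt(vi) * nodes) * inner)
    return 0.5 * (D + D.T)                       # enforce exact symmetry

def F(C, v0=1.0):
    """One iteration: V_t = e^{-t/tau} V_0 + int_0^t e^{-(t-u)/tau} U_u du,
    where U is the effective interaction process built from C."""
    D = interaction_cov(C)
    E = np.exp(-t / tau)
    K = np.where(t[:, None] >= t[None, :],
                 np.exp(-(t[:, None] - t[None, :]) / tau), 0.0)
    return np.outer(E, E) * v0 + (K @ D @ K.T) * dt * dt

# start from the stationary Ornstein-Uhlenbeck covariance of the
# uncoupled system, as in the text, and iterate the map
C = np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
for _ in range(8):
    C = F(C)
```

Inspecting np.diag(C) after the loop shows the variance profile; in this regime (g = 5) its large-t value should stabilize at a finite level, while rerunning with a small g should drive it toward 0, in line with the two regimes described above.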

MEAN-FIELD EQUATIONS FOR TWO POPULATIONS WITH A NEGATIVE FEEDBACK LOOP
Let us now present a case where the fluctuations of the Gaussian field act on the dynamics of μ_α(t) in a nontrivial way, with a behavior strongly departing from the naive mean-field picture. We consider two interacting populations whose connectivity weights are Gaussian random variables, J_αβ ≡ N(J̄_αβ, σ_αβ) with σ_αβ = 1, for (α, β) ∈ {1, 2}². We set S_β(x) = tanh(gx) and I_α = 0, s_α = 0, α = 1, 2.

Theoretical framework
The dynamic mean-field equation for μ_α(t) is given, in differential form, by dμ_α(t)/dt = −μ_α(t)/τ_α + Σ_β J̄_αβ ∫ S_β(μ_β(t) + √(v_β(t)) x) Dx. Let us denote by G_α(μ, v(t)) the righthand side of this equality. Since S is odd, ∫_{−∞}^{+∞} S(√(v_β(t)) x) Dx = 0. Therefore G_α(0, v(t)) = 0 whatever v(t), and hence the point μ_1 = 0, μ_2 = 0 is always a fixed point of this equation.
Let us study the stability of this fixed point. To this end, we compute the partial derivatives of G_α(μ, v(t)) with respect to μ_β for (α, β) ∈ {1, 2}². We have ∂G_α/∂μ_β(μ, v(t)) = −δ_αβ/τ_α + J̄_αβ ∫ S′_β(μ_β + √(v_β(t)) x) Dx, and hence at the point μ_1 = 0, μ_2 = 0 these derivatives read ∂G_α/∂μ_β(0, v(t)) = −δ_αβ/τ_α + J̄_αβ ∫ S′_β(√(v_β(t)) x) Dx. In the case v_α(0) = 0, J = 0, s_α = 0, implying v_α(t) = 0 for t ≥ 0, the equation for μ_α reduces to dμ_α(t)/dt = −μ_α(t)/τ_α + Σ_β J̄_αβ S_β(μ_β(t)). For the sake of simplicity we assume that the two time constants τ_α are equal and denote their common value τ. The eigenvalues of the Jacobian at the fixed point are in this case −1/τ + (g/2)(J̄_11 + J̄_22) ± (g/2)√((J̄_11 − J̄_22)² + 4J̄_12J̄_21); they are complex conjugate when J̄_12J̄_21 < 0 (the negative feedback loop) and cross the imaginary axis as g increases. The corresponding bifurcation is a Hopf bifurcation. The situation is different if one takes into account the fluctuations of the Gaussian field. Indeed, in this case the stability of the fixed point μ = 0 depends on v(t). More precisely, the real and imaginary parts of the eigenvalues of DG(0, v(t)) depend on v(t). Therefore the variations of v(t) act on the stability and on the oscillation period of μ(t). Though the evolutions of μ(t) and v(t) are coupled, we cannot consider them as a coupled dynamical system, since v(t) = C(t, t) is determined by the mean-field equation for C(t, s), which cannot be written as an ordinary differential equation. Note that we cannot assume stationarity here, as in the previous case, since μ(t) depends on time for sufficiently large g. This opens up the possibility of complex dynamical regimes when g is large.
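The stability computation for the deterministic (v = 0) case can be illustrated numerically, assuming the reduced equation dμ/dt = −μ/τ + J̄ S(μ) with S(x) = tanh(gx), so that S′(0) = g. The diagonal weights (5) and τ = 0.1 are consistent with a critical gain g_c = 2; the off-diagonal values (±1) are hypothetical, chosen only to create a negative feedback loop:

```python
import numpy as np

tau = 0.1
Jbar = np.array([[5.0, 1.0],     # off-diagonal entries are hypothetical:
                 [-1.0, 5.0]])   # J12*J21 < 0 makes the eigenvalues complex

def eigenvalues(g):
    """Eigenvalues of the Jacobian of dmu/dt = -mu/tau + Jbar S(mu)
    at mu = 0, in the v(t) = 0 case (S'(0) = g)."""
    return np.linalg.eigvals(-np.eye(2) / tau + g * Jbar)

# the real part -1/tau + 5 g crosses zero at g_c = 1/(tau * 5) = 2
assert eigenvalues(1.9).real.max() < 0.0
assert eigenvalues(2.1).real.max() > 0.0
assert np.allclose(eigenvalues(2.0).real, 0.0, atol=1e-9)
```

At the crossing the eigenvalues are purely imaginary with nonzero imaginary part, which is the Hopf scenario; with the antisymmetric off-diagonal part chosen here the imaginary parts at g = 2 are ±2.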

Numerical experiments
We considered the case J̄_11 = J̄_22 = 5, τ = 0.1, giving a Hopf bifurcation at g_c = 2 when J = 0 (Figure 4). The trajectories of μ_1(t) and v_1(t) are represented in Figure 4 in the case g = 3. When J = 0, μ_1(t) presents regular oscillations (with nonlinear effects, since g = 3 is larger than the critical value g_c = 2 of the Hopf bifurcation). In this case the solution v_1(t) = 0 is stable, as seen in the figure. When J ≠ 0 the Gaussian field has (small) fluctuations which nevertheless interact strongly with the dynamics of μ_1(t), leading to a regime where μ_1(t) and v_1(t) oscillate periodically.

DISCUSSION
The problem of bridging scales is pervasive in the study of complex systems, and in particular in neuroscience. Like many others before us, we have looked at this difficult problem from the theoretical and numerical viewpoints, hoping to get closer to its solution from relatively simple and physically/biologically plausible first principles and assumptions. One of our motivations is to better understand phenomenological neural mass models such as that of Jansen and Rit (1995).
We consider several populations of neurons and start from a microscopic, i.e., individual, description of the dynamics of the membrane potential of each neuron that contains four terms.
The first one controls the intrinsic dynamics of the neuron. It is linear in this article, but this assumption is not essential and could probably be safely removed if needed.
The second term is a stochastic input current, correlated or uncorrelated. The corresponding noise model is very rich, depending on the degree k of smoothness of the g-shapes. It features integrated Brownian noise up to order k − 1.
The third term is a deterministic input current, and the fourth one describes the interaction between the neurons through random connectivity coefficients that weigh the contributions of other neurons through a set of functions applied to their membrane potentials. The only hypothesis on these functions is that they, as well as their first-order derivatives, are smooth and bounded. The obvious choice of sigmoids is motivated by standard rate-model ideas. Another appealing choice is a smooth approximation to a Dirac delta function, thereby opening a window on the world of spiking neurons. Thus the model presented in this paper is more general than the instantaneous rate model underlying Ermentrout's voltage-based model (Ermentrout, 1998), even though we have not explored this avenue.
We then derive the mean-field equations and provide a constructive and new proof, under some mild assumptions, of the existence and uniqueness of a solution of these equations over finite and infinite time intervals. The key idea is to look at this mean-field description as a global problem on the probability distribution of the membrane potentials, unlike previous studies. Our proof provides an efficient way of computing this solution, and our numerical experiments show good agreement with previous studies. It is interesting to note that a sufficient condition for the convergence of our algorithm is related to the previously mentioned noise model. We prove that if the noise matrix F is full rank, with bounded eigenvalues, then the algorithm is in general convergent. An important fact to note is that the solutions of the mean-field equations that we construct are fundamentally non-Markovian, eliminating the need for such approximations as the introduction of the q parameter summarizing the whole history of the non-Markovian process; see below.
In the case where the nonlinearities are chosen to be sigmoidal, our results shed new light on existing neural mass models. Indeed, as shown in Section "General Derivation of the Mean-Field Equation", these appear as approximations of the mean-field equations in which the intricate but fundamental coupling between the time variations of the mean membrane potentials and their fluctuations, as represented by the covariance functions, is neglected.
An alternative approach has recently been proposed by Chizhov and collaborators (Chizhov and Graham, 2007). Their approach consists in reducing the large number N of degrees of freedom of the neural assembly by constructing a probability density ρ on the phase space of neuron states in the limit N → ∞. This is a non-rigorous approach in which the evolution equations for ρ are derived heuristically. In particular, it is assumed that ρ depends on two parameters only: the current time t and the time elapsed since the last spike, t*.

FIGURE 4 | Evolution of the mean μ_1(t) and variance v_1(t) for the mean field of population 1, for J = 0 and 2, over a time window [0, 20]. n is the number of iterations of F_1 defined in Section "Existence and Uniqueness of Solutions in Finite Time"; n = 100 corresponds to a number of iterations for which the method has essentially converged (up to some precision). Note that v_1(t) has been magnified by a factor of 100. Though the Gaussian fluctuations are small, they have a strong influence on μ_1(t).

Under these assumptions the initial phase space of neuron states is mapped to the two-dimensional space (t, t*), while ρ(t, t*)dt characterizes the fraction of neurons which have fired in the time interval [t − t*, t − t* + dt]. Therefore this approach intrinsically holds for integrate-and-fire neuron models, in which the neuron's membrane potential history is summarized by the last spike time, when the potential is reset to a constant value. As noticed by these authors, this makes it possible to circumvent the main problem in mean-field approaches for firing-rate models, which we also discuss in the present paper: when using mean-field theory to characterize stationary regimes, one needs to introduce ad hoc parameters (see, e.g., the parameter q introduced in Section "Stationary Solutions") summarizing the whole history of the non-Markovian process.
Introducing a "history cut-off " while resetting the membrane potential to a constant value indeed removes this diffi culty. Therefore, it might be interesting to compare our approach in the case of integrate-and-fi re models (see above remark on the choice of the nonlinearity), to the approach of Chizhov and collaborators. This could provide some rigorous basis for their analysis and allow to elucidate the role of fi eld fl uctuations which does not appear explicitly in the probability density approach.

CONCLUSION AND FURTHER WORK
On more general grounds, our goal is now to extend the present work in several directions.

BIFURCATIONS ANALYSIS OF THE DYNAMIC MEAN-FIELD EQUATIONS
From the present analysis, and as shown in the simple examples of Section "Numerical Experiments", the mesoscopic dynamics of the average membrane potential of a population of neurons can be quite different from that of classical phenomenological equations à la Jansen-Rit once one includes the non-Markovian fluctuations of the interaction fields, which summarize the cumulative effects of the nonlinear interactions of a given neuron with the bulk of the other neurons. Jansen-Rit equations are commonly used in the neuroscience community either to anticipate the dynamics of local field potentials in relation with imaging (optical imaging, MEG-EEG) or to understand neurophysiological disorders such as epilepsy. Bifurcation analysis of these equations reveals dynamical regimes that can be related to experiments (Grimbert and Faugeras, 2006), and they can be generalized using more accurate neural models (Wendling et al., 2005). Is there any need to generalize these equations, which we claim to be incomplete, while people commonly use them with some satisfaction? Are there new, experimentally accessible phenomena that are exhibited by the generalized mean-field equations and do not appear in the naive ones? These are obviously important questions that we intend to address in the near future. On mathematical grounds, the goal is to perform a bifurcation analysis of the map F on the space of trajectories introduced in the present paper. Do any new salient dynamical regimes appear? If such regimes exist, the goal will be, on experimental grounds, to interact with experimentalists in order to determine under which conditions such a regime can be exhibited, and what its implications are for cortical column dynamics or function.

INVESTIGATIONS OF NONSTATIONARY REGIMES
As discussed in this paper, and as is well known in the physics community (especially the spin-glass community), the dynamic mean-field approach raises serious difficulties as soon as one tries to describe stationary dynamics. On technical grounds, this stems from the non-commutativity of the two limits N → ∞ and t → ∞, already discussed in Sompolinsky and Zippelius (1982). As a result, one is led to introduce ad hoc phenomenological parameters, depending on initial conditions, that can be determined in statistical physics models where the distribution of equilibria is known (Gibbs distribution), using sophisticated techniques such as the replica "trick" (Houghton et al., 1983). For spin-glasses, a simple solution to this problem is known only in the high-temperature regime. This restriction also appears in the present paper, where the existence and uniqueness of a stationary solution is proved only for low values of the gain parameter g (which plays a role similar to the inverse temperature). However, we are not so much interested in stationary dynamics, since brain processes are ultimately nonstationary. Our approach, valid on any finite time interval, opens up the possibility of characterizing mean-field equations in transient regimes, with an analysis strategy that can moreover be easily implemented. To the best of our knowledge, this type of technique has never been used in the statistical physics community, where iterations on spaces of trajectories are not part of the standard toolbox. Therefore our work could allow the (numerical) investigation of cortical columns submitted to nonstationary inputs, with strong implications for neuroscience.

EXTENSION TO A LARGER CLASS OF MODELS
A very challenging question is the application of this theory to spiking neuron models. We briefly mentioned in Section "Discussion" that this may be possible through the use of nonsigmoidal functions in the interaction terms. This idea could be applied to the analysis of integrate-and-fire models with conductance-based synapses, which constitute good models of spiking neurons. As discussed at the end of Section "Discussion", the analysis of the mean-field equations could be simplified by the fact that memory is reset after a neuron fires. There is, however, a need to characterize the regions of parameter space where neurons can take an arbitrarily large time to fire for the first time (Cessac, 2008; Cessac and Viéville, 2008). This is the main obstruction to the application of our theory to this type of model.

A. IDENTIFICATION OF THE MEAN-FIELD EQUATIONS
Ben-Arous and Guionnet studied from a mathematical point of view the problem of finding a mean-field description of large networks of spin-glasses. Using different methods of stochastic analysis, they obtained a weak limit of the law of a given spin and proved the asymptotic independence of the spins. Our equations do not directly fit into their study: indeed, in their setting the spin intrinsic dynamics is nonlinear while the interaction is linear, and everything is done in dimension one. Nevertheless, their proof extends to our case, which is in some respects simpler. For instance, in the case of the simple model with one population, we can readily adapt their proof. More precisely, let P = 1; the equation of the network reads dV_t^i = (−V_t^i/τ + Σ_{j=1}^N J_ij S(V_t^j)) dt + Λ dW_t^i. In this case we define, for X ∈ M_1^+(C([0, T], ℝ)), the effective interaction term U^X, the effective interaction process defined in Section "The Mean-Field Equations", i.e., the Gaussian process of mean J̄ 𝔼^X[S(X_t)] and of covariance Cov(U_t^X, U_s^X) = σ² 𝔼^X[S(X_t)S(X_s)].
Let us denote by P the law of the membrane potential when there is no interaction (it is an Ornstein-Uhlenbeck process), and by V̂_N = (1/N) Σ_{i=1}^N δ_{V^i} the empirical measure. We can prove that, under the probability distribution averaged over the connectivities (see below), the empirical measure satisfies a large deviation principle with good rate function H defined as in Guionnet (1997). Using this large deviation result, we can prove annealed and quenched tightness of the empirical measure, and finally its convergence toward the unique process at which the good rate function H achieves its unique minimum, characterized by having a density with respect to P satisfying an implicit equation in which 𝔼 denotes the expectation over the effective interaction process U^Q.
Following the steps of Ben-Arous and Guionnet (1997), we can also prove that there exists a unique solution to this equation, and that this solution satisfies a nonlinear non-Markovian stochastic differential equation which can also be written as our mean-field equation averaged over the connectivities (see Ben-Arous and Guionnet, 1995); this is exactly Eq. 33. More precisely, letting L_V be the law of the solution of this equation, they prove that V satisfies the corresponding nonlinear equation in L_V. This result can probably be extended to the multi-population case using the multidimensional Girsanov theorem, but the corresponding mathematical developments are beyond the scope of this paper.
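The convergence of the empirical measure can be illustrated by simulating the one-population network directly. The sketch below is a rough Euler scheme with hypothetical parameter values, centered Gaussian weights with the σ/√N mean-field scaling, and a small gain g placing the system in the trivial regime, where the limit process has zero mean and vanishing stationary variance:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, g, sigma = 0.25, 0.5, 1.0   # illustrative values; small g: trivial regime
T, dt = 2.0, 0.01
steps = int(round(T / dt))

def simulate(N):
    """Euler scheme for the network dV_i/dt = -V_i/tau + sum_j J_ij S(V_j),
    with S(x) = tanh(g x) and centered Gaussian weights of standard
    deviation sigma/sqrt(N)."""
    J = rng.normal(0.0, sigma / np.sqrt(N), size=(N, N))
    V = rng.uniform(-1.0, 1.0, size=N)
    for _ in range(steps):
        V = V + dt * (-V / tau + J @ np.tanh(g * V))
    return V

V = simulate(400)
# in the trivial regime the empirical moments across neurons at time T
# should be small, consistent with the zero limit process
emp_mean, emp_var = V.mean(), V.var()
```

Comparing emp_mean and emp_var across increasing N gives a crude check of the concentration of the empirical measure around the mean-field limit.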

B. THE RESOLVENT
In this appendix we introduce the resolvent Φ_L of the homogeneous linear differential equation dx/dt = L(t)x, i.e., the solution of dΦ_L(t, t_0)/dt = L(t)Φ_L(t, t_0) with Φ_L(t_0, t_0) = Id_P (62), where Id_P is the P × P identity matrix, and give some of its useful properties.
Proposition B.1. The resolvent satisfies the following properties: (i) Φ_L(t + s, t_0) = Φ_L(t + s, t) · Φ_L(t, t_0); (ii) Φ_L(t, t_0) is invertible, with inverse Φ_L(t_0, t), which satisfies dΦ_L(t_0, t)/dt = −Φ_L(t_0, t)L(t).

Proof. Pathwise (strong) uniqueness of the solution follows directly from the results on SDEs with Lipschitz coefficients (see, e.g., Karatzas and Shreve, 1991, Theorem 2.5 of Chapter 5). It is clear that X_{t_0} = X_0. We use Itô's formula for the product of two stochastic processes to prove that the process of Eq. 67, whose martingale part is driven by F(t) dW_t, is a solution of Eq. 66. Hence the theorem is proved.
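Properties (i) and (ii) of Proposition B.1 can be checked numerically for a simple time-dependent L(t); the matrix and the RK4 integrator below are a generic illustration, not tied to the model of the paper.

```python
import numpy as np

def L(t):
    # an arbitrary smooth, time-dependent 2x2 matrix (purely illustrative)
    return np.array([[0.0, 1.0 + np.sin(t)],
                     [-1.0, -0.5 * t]])

def resolvent(t1, t0, steps=2000):
    """Integrate dPhi/dt = L(t) Phi, Phi(t0, t0) = Id, with classical RK4."""
    h = (t1 - t0) / steps
    Phi, s = np.eye(2), t0
    for _ in range(steps):
        k1 = L(s) @ Phi
        k2 = L(s + h / 2) @ (Phi + h / 2 * k1)
        k3 = L(s + h / 2) @ (Phi + h / 2 * k2)
        k4 = L(s + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return Phi

# (i) group property: Phi(2, 0) = Phi(2, 1) Phi(1, 0)
assert np.allclose(resolvent(2.0, 0.0),
                   resolvent(2.0, 1.0) @ resolvent(1.0, 0.0), atol=1e-6)
# (ii) inverse: Phi(0, 1) Phi(1, 0) = Id (integrating backward in time)
assert np.allclose(resolvent(0.0, 1.0) @ resolvent(1.0, 0.0),
                   np.eye(2), atol=1e-6)
```

Note that the inverse is obtained simply by integrating the same equation backward in time, which is exactly property (ii).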

C. MATRIX NORMS
In this section we recall some definitions of matrix and vector norms. Let M_{n×n} be the set of n × n real matrices. It is a vector space of dimension n², and the usual L^p norms, 1 ≤ p ≤ ∞, can be defined on it; given L ∈ M_{n×n}, we write ‖L‖_p for the corresponding norm. Given a vector norm ‖·‖ on ℝⁿ, the induced norm on M_{n×n}, also written ‖·‖, is defined as ‖L‖ = sup_{x≠0} ‖Lx‖/‖x‖. Since M_{n×n} is finite dimensional, all norms are equivalent. In this article we mainly use the norm induced by the Euclidean vector norm. This so-called spectral norm is equal to the largest singular value of L, i.e., the square root of the largest eigenvalue of the positive semidefinite matrix LᵀL. If L is positive definite, this is its largest eigenvalue, which is also called its spectral radius, written ρ(L). Table 1 summarizes some notation introduced in the article and used in several places; in it, μ and σ_max are defined in Lemma 1, and λ_max^Γ is defined in Assumption 1.
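The characterization of the spectral norm can be checked with a short computation on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# induced Euclidean (spectral) norm = largest singular value
#                                   = sqrt(largest eigenvalue of A^T A)
spectral = np.linalg.norm(A, 2)
largest_sv = np.linalg.svd(A, compute_uv=False)[0]
sqrt_eig = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())
assert np.allclose(spectral, largest_sv)
assert np.allclose(spectral, sqrt_eig)

# for a symmetric positive definite matrix the spectral norm equals the
# largest eigenvalue, i.e. the spectral radius rho(P)
P = A.T @ A + np.eye(4)          # positive definite by construction
assert np.allclose(np.linalg.norm(P, 2), np.abs(np.linalg.eigvals(P)).max())
```

The last assertion is the statement used in the article: for positive definite matrices the spectral norm and the spectral radius coincide.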

D. IMPORTANT CONSTANTS
Proof. The first inequality follows from taking the infinity norm of both sides of Eq. 41 and using Assumption 1(a), Eq. 64, Lemma 1, and Assumption 1(c).
The second inequality follows from taking the infinity norm of both sides of Eq. 42 and using Assumption 1(a), Eqs. 64 and 65, Lemma 1, and Assumption 1(b).  Proof. We use Eq. 42, which we rewrite as follows, using the group property of the resolvent Φ_L: