Complex Dynamics of Noise-Perturbed Excitatory-Inhibitory Neural Networks With Intra-Correlative and Inter-Independent Connections

Real neural systems usually contain two types of neurons, namely excitatory neurons and inhibitory ones. Analytical and numerical interpretation of the dynamics induced by the different types of interactions among these two kinds of neurons is beneficial to understanding the physiological functions of the brain. Here, we articulate a model of noise-perturbed random neural networks containing both excitatory and inhibitory (E&I) populations. In particular, we take into account neurons that are intra-correlatively and inter-independently connected across the two populations, which differs from most existing E&I models, where only independently connected neurons are considered. By employing the typical mean-field theory, we obtain an equivalent two-dimensional system driven by a stationary Gaussian process. Investigating the stationary autocorrelation functions of the obtained system, we analytically find the parameter conditions under which synchronized behaviors between the two populations emerge. Taking the maximal Lyapunov exponent as an index, we also find different critical values of the coupling strength coefficients for chaotic excitatory neurons and for chaotic inhibitory ones. Interestingly, we reveal that noise is able to suppress chaotic dynamics of random neural networks with neurons in two populations, while an appropriate correlation coefficient in the intra-coupling strengths can enhance the occurrence of chaos. Finally, we also detect a previously reported phenomenon in which a parameter region corresponds to neither linearly stable nor chaotic dynamics; the size of this region, however, depends crucially on the populations' parameters.

Using the Martin-Siggia-Rose-de Dominicis-Janssen path-integral formalism [9,14,31,37], the following result was obtained in [48]. For the completeness of this article, however, we provide the detailed arguments here.
We first consider the one-dimensional equation written as

where N(t) is a stationary Gaussian process with mean zero satisfying ⟨N(t)N(t′)⟩ = c(t, t′). First, we discretize the above equation in the following manner:

where Δt = t_i − t_{i−1} and t_0 = 0. Let N_{i−1} := N(t_{i−1}) and

By the property of the Dirac delta function, we derive

Taking the inverse Fourier transform representation of the Dirac delta function, we get

Hence, p(x_1, x_2, ⋯, x_M) = ρ(N_0, N_1, ⋯, N_{M−1})

Now, applying the Cholesky decomposition to C yields C = BB^⊤, so that Ñ = B^{−1}N. This gives

Notice that

Thus, the formula above becomes

Now we introduce the source field l = (l(t), t ∈ ℝ) and consider l = (l_1, l_2, ⋯, l_M), where l_i = l(t_i). Here, we study the characteristic function as follows:

Letting Δt → 0 makes the formula continuous, which immediately yields:

Consequently, we derive the moment-generating functional as follows:

where

In particular, N(t) is supposed to be white noise satisfying

Moreover, for the higher-dimensional system, which reads as

where ξ(t) is the standard white noise, the moment-generating functional becomes

which is akin to the functional obtained above for the one-dimensional system.
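The Cholesky step C = BB^⊤ used above to whiten the Gaussian process can be illustrated numerically. Below is a minimal sketch; the exponential covariance kernel, grid, and sample count are illustrative assumptions, not quantities from the derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized covariance matrix C_ij = c(t_i, t_j); the exponential
# kernel here is only an illustrative choice, not the paper's c(t, t').
t = np.linspace(0.0, 5.0, 50)
C = np.exp(-np.abs(t[:, None] - t[None, :]))

# Cholesky factorization C = B B^T, as used to map N -> B^{-1} N.
B = np.linalg.cholesky(C)

# Correlated samples N = B z with z standard normal; then B^{-1} N = z
# recovers independent coordinates (the "whitened" process).
z = rng.standard_normal((50, 20000))
N = B @ z

# The empirical covariance of N should approximate C up to sampling error.
C_emp = N @ N.T / N.shape[1]
print(np.max(np.abs(C_emp - C)))
```

The same factorization underlies the change of variables Ñ = B^{−1}N in the path-integral measure: the transformed process has identity covariance.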

SB. EQUIVALENT DYNAMIC EQUATION USING MEAN FIELD METHOD AND SADDLE-POINT APPROXIMATION
Notice that

We thus average the functional Z[l](J) with respect to J and obtain

Now, we calculate the coupling strength between the neurons in (SB1), which encompasses J_{Ki,Lj}. For two neurons in the same population, we derive the term as

where

Then, we apply the Cholesky decomposition to A_K, obtaining

Additionally, for two neurons in different populations, we have that, as K ≠ L,

To calculate the right-hand side of the above quantity explicitly, we notice that the number of elements corresponding to the diagonals is of order N^{−1} compared with the remaining terms. Hence, we obtain

where the subscripts K and L, denoting the two populations, are omitted for simplicity, and

In what follows, we use the notations

Then, we have the double integral ∫∫ dt dt′

and define the remaining terms in (SB4) in an analogous manner.
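The coupling structure being averaged over — correlated weights within a population, independent weights across populations — can be sketched numerically. One common construction for intra-correlated Gaussian couplings mixes a symmetric and an antisymmetric matrix; note that this particular correlation structure (between reciprocal pairs J_ij, J_ji) is an assumption for illustration and may differ from the model's exact A_K:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eta, g = 400, 0.5, 1.0

# Mix a symmetric and an antisymmetric Gaussian matrix so that
# corr(J_ij, J_ji) = eta (an illustrative intra-population structure).
X = rng.standard_normal((n, n)) * g / np.sqrt(n)
S = (X + X.T) / np.sqrt(2.0)          # symmetric part
A = (X - X.T) / np.sqrt(2.0)          # antisymmetric part
J_EE = np.sqrt((1 + eta) / 2) * S + np.sqrt((1 - eta) / 2) * A

# Couplings between different populations are drawn independently.
J_EI = rng.standard_normal((n, n)) * g / np.sqrt(n)

# Empirical check: reciprocal intra-population pairs are correlated,
# while intra- and inter-population entries are uncorrelated.
iu = np.triu_indices(n, k=1)
corr_intra = np.corrcoef(J_EE[iu], J_EE.T[iu])[0, 1]
corr_inter = np.corrcoef(J_EE[iu], J_EI[iu])[0, 1]
print(corr_intra, corr_inter)
```

The empirical reciprocal correlation approaches eta as n grows, while the inter-population correlation vanishes.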
Next, we calculate Q, R_K, and T_K more explicitly. To this end, from the property of the Dirac delta function, the quantity in (SB4) is rewritten as

Then, we write the Dirac delta function in the form of a Fourier transform as

where Q̃ is an imaginary field and

Then, we obtain

where

Since the source field has no physical meaning, we omit it in the following calculations. When N is sufficiently large, we apply the saddle-point approximation to Z̃, which yields:

Here, all the parameters with star superscripts in (SB6) satisfy

where δ represents the variation with respect to the corresponding quantity and

For Q*, we get

Denote, respectively, by

where ⟨f⟩_{ω,E} (resp., ⟨f⟩_{ω,I}) stands for the average value of f over the excitatory (resp., inhibitory) population in the large-scale limit. Then,

As x̃ is the imaginary field derived from the Fourier transform of the Dirac delta function, we stipulate that its expectation is zero. This stipulation is physically reasonable, so that Q̃* = 0. Similarly, we get

In the following, the subscript of the expectation ⟨·⟩ is omitted. Let

When N is sufficiently large, we have

which implies that

Thus, together with the results obtained in (SA1) of Appendix SA, the formula above becomes the moment-generating functional for N_E identical excitatory neurons and N_I identical inhibitory neurons with an external Gaussian process. Correspondingly, the dynamical equations become:

where ξ_K(t) with K ∈ {E, I} are mutually independent white noises and γ_K(t) with K ∈ {E, I} are Gaussian processes with zero means satisfying

This completes the derivation of the equations anticipated above.
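The resulting effective dynamics can be integrated numerically by the Euler-Maruyama scheme. The sketch below assumes a leaky drift of the form dx_K = (−x_K + γ_K) dt + σ dW_K with the self-consistent Gaussian field γ_K frozen to a constant for simplicity; this drift and the frozen-field simplification are assumptions of the sketch, not the paper's exact equations:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, sigma = 1e-3, 10.0, 0.3
steps = int(T / dt)

# Illustrative effective dynamics for the two populations K in {E, I}:
#   dx_K = (-x_K + gamma_K) dt + sigma dW_K,
# where gamma_K stands in for the mean-field Gaussian drive.
x = np.zeros(2)                       # state [x_E, x_I]
gamma = rng.standard_normal(2)        # frozen Gaussian drive (toy choice)
traj = np.empty((steps, 2))
for i in range(steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(2)
    x = x + (-x + gamma) * dt + noise
    traj[i] = x

# With a unit leak rate, each x_K fluctuates around gamma_K.
print(traj[-1], gamma)
```

In the full model the drive γ_K would itself be determined self-consistently by the population autocorrelations, rather than frozen.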

SC. DIFFERENTIAL EQUATION FOR AUTOCORRELATION FUNCTION
Substitution of (4) into (3) gives

Then, we have

By setting t′ = t + τ in (SC1) and assuming that the neurons' states in the two populations are stationary Gaussian processes, we obtain

SD. PROOFS OF PROPOSITIONS III.1 & III.2
Proof of Proposition III.1: Define W_{E,I} in the same manner as in (10). Thus, we have

Consider the step function ε(t) defined by ε(t) = 1/2 for t ⩾ 0 and ε(t) = −1/2 for t < 0, whose derivative is the Dirac delta function, a generalized function. As C_{E,I} is even by definition, it is reasonable to assume that

By virtue of Price's theorem [45], we have

Hence,

with

With the additional assumption that the autocorrelation functions tend to a constant as τ goes to infinity, we have

and

which completes the analytical validation of this proposition.
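Price's theorem, used in the proof above, states that for jointly Gaussian (X, Y) with unit variances and covariance c, ∂_c ⟨f(X)g(Y)⟩ = ⟨f′(X)g′(Y)⟩. A small Monte Carlo check with the arctangent transfer function; the sample size, covariance value, and finite-difference step are illustrative assumptions (common random numbers keep the finite-difference variance small):

```python
import numpy as np

rng = np.random.default_rng(4)
n, c, dc = 1_000_000, 0.3, 1e-3

# Jointly Gaussian pair with unit variances and covariance c:
#   X = z1,  Y = c*z1 + sqrt(1 - c^2)*z2.
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)

def mean_phi_phi(c):
    y = c * z1 + np.sqrt(1.0 - c * c) * z2
    return np.mean(np.arctan(z1) * np.arctan(y))

# Finite-difference derivative in c, using the same z1, z2 at c +/- dc.
lhs = (mean_phi_phi(c + dc) - mean_phi_phi(c - dc)) / (2 * dc)

# Price's theorem prediction <phi'(X) phi'(Y)> with phi'(u) = 1/(1+u^2).
y = c * z1 + np.sqrt(1.0 - c * c) * z2
rhs = np.mean((1.0 / (1.0 + z1**2)) * (1.0 / (1.0 + y**2)))
print(lhs, rhs)
```

The two estimates agree to within Monte Carlo error, which is the identity applied to ∂C⟨ϕ(x)ϕ(x′)⟩ in the proof.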
(2) In fact, when one of the three conditions assumed in the proposition is satisfied, the two equations describing the dynamics of C_E and C_I in (SD1) are identical. Moreover, by (9) and (11), we have c_{E0} = c_{I0}. Therefore, we conclude that the values of C_E and C_I are identical for all τ.

SE. EQUIVALENT DYNAMIC EQUATIONS FOR TWO DYNAMICS
For the moment-generating functional

we average it with respect to J. Then, for any pair of neurons in the same population, we calculate

Additionally, for any pair of neurons from different populations, we calculate

Analogously to (SB5) computed in Appendix SB, we introduce the auxiliary fields and then obtain

Proof. When one of the conditions assumed in Proposition IV is satisfied, the dynamical equations of G_E and G_I are the same, so that G_E(t, t′) = G_I(t, t′). As d_K(t) = −2ϵG_K(t, t), the maximal Lyapunov exponents for the two populations are identical. Letting G_K(t, t′) = H_K(t + t′, t − t′) and H_K(T, τ) = e^{κT}ψ_K(τ) leads to

+ N_I(1 + δ_{KI}η_I) f_{ϕ′(·+⟨x_I⟩)}(C_I(τ), c_{I0}) ψ_I(τ) = −(κ + 1)² ψ_K(τ).
From the definition of the maximal Lyapunov exponent, it follows that λ_max = κ. Since G_E = G_I, we thus conclude that

Together with V as defined in (7), C_E = C_I, V_E = V_I, and Price's theorem, we obtain

−∂²_τ ψ(τ) + Y(τ)ψ(τ) = [1 − (κ + 1)²] ψ(τ),

where the subscripts are omitted for simplicity, Y(τ) = −X″(C(τ)), and X(C) = V(C, C). This takes the form of a Schrödinger equation [34]. In light of Sturm-Liouville theory, Eq. (SG1) possesses countably many solutions satisfying ψ(∞) = 0. It can be easily verified from (8) and (SD1) that |C′(τ)|, having no zero point, is a well-posed solution of Eq. (SG1). Here, the well-posedness of a solution is ensured if C″(0) = 0, and such a solution corresponds to the ground-state energy E_0 = 1 − (κ_0 + 1)² [39]. It also follows from the node theorem [17] that its associated eigenvalue is the maximal one of Eq. (SG1). In our case, this eigenvalue is uniquely attained at κ_0 = 0. As a consequence, the onset of chaotic behaviour occurs where the MLEs of both populations are zero. To guarantee the validity of C″(0) = 0, the following equation needs to be satisfied:

Specifically, the critical point g_{K,c} satisfies c_{K0} − g²_{K,c}(1 + if we choose an odd transfer function (for instance, the arctangent function used in this work) or if the means of the two populations vanish.
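The MLE criterion above can be probed numerically with the standard two-trajectory (Benettin) renormalization method. The sketch below uses a single-population random rate network ẋ = −x + Jϕ(x) with ϕ = tanh and coupling gain g above the classical chaos threshold g_c = 1; the single-population reduction, tanh transfer function, and all parameters are illustrative assumptions rather than the two-population model analyzed here:

```python
import numpy as np

rng = np.random.default_rng(5)
n, g, dt, steps = 200, 2.0, 0.05, 4000

# Random coupling matrix with entries of variance g^2 / n.
J = rng.standard_normal((n, n)) * g / np.sqrt(n)

def step(x):
    # One Euler step of the rate dynamics dx/dt = -x + J tanh(x).
    return x + dt * (-x + J @ np.tanh(x))

# Benettin method: evolve a reference and a perturbed trajectory,
# renormalize their separation, and average the log growth rates.
x = rng.standard_normal(n)
d0 = 1e-8
v = rng.standard_normal(n)
y = x + d0 * v / np.linalg.norm(v)    # perturbation of exact norm d0
log_growth = 0.0
for i in range(steps):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    log_growth += np.log(d / d0)
    y = x + (d0 / d) * (y - x)        # renormalize the separation

mle = log_growth / (steps * dt)
print(mle)
```

For g above the threshold the estimate is positive, consistent with the MLE-based chaos criterion; repeating the sweep over g locates the numerical critical coupling.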