Discrete Dynamics of Dynamic Neural Fields

The large and small cortices of the brain are known to contain vast numbers of neurons that interact with one another. They thus form a continuum of active neural networks whose dynamics are yet to be fully understood. One way to model these activities is to use dynamic neural fields, which are mathematical models that approximately describe the behavior of these congregations of neurons. These models have been used in neuroinformatics, neuroscience, robotics, and network analysis to understand not only brain functions and brain diseases, but also learning and brain plasticity. In their theoretical forms, they are given as ordinary or partial differential equations with or without diffusion. Many of their mathematical properties are still under-studied. In this paper, we propose to analyze discrete versions of dynamic neural fields based on nearly exact discretization scheme techniques. In particular, we will discuss conditions for the stability of nontrivial solutions of these models, based on various types of kernels and their corresponding parameters. Monte Carlo simulations are given for illustration.


INTRODUCTION
To model a large number of cells randomly placed together with a uniform volume density and capable of supporting various simple forms of activity, including plane waves, spherical and circular waves, and vortex effects, Beurle (1956) proposed a continuous model consisting of a partial differential equation that can capture some neuronal behaviors, such as excitability at the cortical level. Subsequent modifications by Wilson and Cowan (1972) and Amari (1977) allowed for new features such as inhibition. This essentially gave birth to the theory of dynamic neural fields (DNFs), with far-reaching applications. The literature on DNFs is rather rich, and the trends nowadays are toward cognition, plasticity, and robotics. In cognitive neuroscience, for instance, DNFs have enabled analyses of electroencephalograms (Nunez and Srinivasan, 2006), short-term memory (Camperi and Wang, 1998), visual hallucinations (Ermentrout and Cowan, 1979; Tass, 1995), and visual memory (Schmidt et al., 2002). Durstewitz et al. (2000) proposed neurocomputational models of working memory using single-layer DNFs and showed that the temporal dynamics they obtained compare well with in vivo observations. Simmering et al. (2008) and Perone and Simmering (2019) used DNFs to model spatial cognition by successfully simulating infants' performance in some tasks. Quinton and Goffart (2017) used nonlinearities in DNFs to propose a unifying model for goal-directed eye movements that allows for the implementation of qualitatively different mechanisms, including memory formation and sensorimotor coupling. Plasticity in neuroscience can be thought of as the ability of the brain to adapt to external activities by modifying some of its structure. Connections between plasticity and DNFs have been made in studies on intrinsic plasticity, see Strub et al. (2017), or in parallel works such as Neumann and Steil (2011), Pozo and Goda (2010), and the references therein. Robotics has been a great niche for DNFs, with the works of Bicho et al. (2000), Erlhagen and Schöner (2001), Erlhagen and Bicho (2006), and Bicho et al. (2010).
To recall the theoretical aspects of DNFs, let Ω ⊆ R^d be a manifold, where d is a positive integer. In the presence of neurons located on Ω at time t and arranged on L layers, the average potential function V_k(x_k, t) is often used to understand the continuous field on the kth layer. V_k(x_k, t) is the average membrane potential of the neuron located at position x_k at time t on the kth layer. When L = 1, V(x_k, t) can also be understood as the synaptic input or activation at time t of a neuron at position or direction x_k. It satisfies the equation (see Amari, 1977)

$$\frac{\partial}{\partial t} V_k(x_k, t) = -V_k(x_k, t) + \sum_{\ell=1}^{L} \int_{\Omega} W_{k\ell}(x_k, x_\ell)\, G[V_\ell(x_\ell, t)]\, dx_\ell + S_k(x_k, t), \qquad (1)$$

where W_{kℓ}(x_k, x_ℓ) is the intensity of the connection between a neuron located at position x_k on the kth layer and a neuron at position x_ℓ on the ℓth layer, and G[V_ℓ(x_ℓ, t)] is the pulse emission rate (or activity) at time t of the neuron located at position x_ℓ on the ℓth layer. G is often chosen as a monotone increasing function. S_k(x_k, t) represents the intensity of the external stimulus at time t arriving at the neuron at position x_k on the kth layer, see Figure 1 below. Most mentions of discrete systems involving DNFs are of Euler forward type discretization (see Erlhagen and Bicho, 2006; Quinton and Goffart, 2017; beim Graben and Hutt, 2014), with simulations as the main objective. Stability analysis of solutions of DNFs in the sense of discrete dynamical systems is either understudied or often ignored. beim Graben and Hutt (2014) studied a heterogeneous aspect of DNFs and found the existence of attractors and saddle nodes for solutions of (1). There are also discussions of DNFs as neural networks. For instance, Sussillo and Barak (2013) discussed continuous-time recurrent neural networks (RNNs) as differential equations of the form

$$\dot{V}_k = -V_k + \sum_{j=1}^{M} L_{kj}\, G(V_j) + \eta_k,$$

where V_k is the kth component of an M-dimensional state of activations, the G(V_j) are the firing rates, and η_k = Σ_{j=1}^{I} B_{kj} u_j represents the external stimulus received from I inputs u_j with synaptic weights B_{kj}. The weights L_{kj} are selected from a normal distribution. In a sense, Equation (1) can be considered a continuous-time RNN. Authors such as Elman (1990) and Williams and Zipser (1990), and most recently Durstewitz (2017) (see Equation 9.8 therein), have discussed discrete-time RNNs formulated as difference equations of the form

$$V^{(i)}_{n+1} = G\Big(W_{i0} + \sum_{j} W_{ij} V^{(j)}_{n}\Big) + \eta^{(i)}_{n},$$

where G is a monotone increasing function, W_{i0} is a basic activity level for the neuron at position x_i, the W_{ij}'s are weights representing the connection strength between the neuron at position x_i and the neuron at position x_j, and η^{(i)}_n is some external stimulus at time n arriving at the neuron at position x_i. So essentially, after discretization, (1) may yield not only a discrete-time dynamical system, but also an RNN. In fact, Jin et al. (2021) considered a discrete equivalent in space of Equation (1) for L = 2 while keeping time continuous, and discussed it in the context of unsupervised learning, which could help understand plasticity in the brain. The two main goals of this paper are: 1. to propose a discrete dynamic neural field model that is both numerically convergent and dynamically consistent; the model is based on the nearly exact discretization techniques proposed in Kwessi et al. (2018), which ensure that the discrete system obtained preserves the dynamics of the continuous system it originated from; 2. to propose a rigorous and holistic theoretical framework to study and understand the stability of dynamic neural fields.
The model we propose has the advantage of being general and simple enough to implement. It also captures different generalizations of DNFs, such as predictive neural fields (PNFs) and active neural fields (ANFs). We further show that the proposed model is in effect a Mann-type (see Mann, 1953) local iterative process, which admits weak-type solutions under appropriate conditions. The model can also be written as a recurrent neural network. Stability analysis shows that in some cases, the proposed model can be studied using graph theory.
The remainder of the paper is organized as follows: in section 2, we propose our nearly exact discretization scheme for DNFs. In section 3.1, we give a brief overview of the essential notions for the stability analysis of discrete dynamical systems; in section 3.2, we discuss the existence of nontrivial fixed solutions for DDNFs; in sections 3.3 and 3.4, we discuss stability analysis for single-layer and multiple-layer discrete DNFs. In section 3.5, we propose simulations and their interpretation for neural activity, and in section 4, we make our concluding remarks.

Discrete Schemes for DNFs
For the sake of simplicity, we will use the following notations: given a positive integer L, integers 1 ≤ k, ℓ ≤ L, and M_k, M_ℓ, we write V^{(k)}_{i,n} := V_k(x_i, t_n), S^{(k)}_{i,n} := S_k(x_i, t_n), and W^{(k,ℓ)}_{ij} := W_{kℓ}(x_i, y_j), where h_n = t_{n+1} − t_n and φ(h_n) = 1 − e^{−h_n} is the time scale of the field. We obviously assume that all the layers of the domain have the same time scale. |Ω| represents the size of the field Ω, which henceforth we will assume to be finite. x_i represents a point on the kth layer and y_j a point on the ℓth layer. M_k represents the number of interconnected neurons on the kth layer of the domain Ω, and β_k = |Ω|/M_k represents the spatial discretization step on the kth layer of the domain Ω. When L = 1, we will adopt the notations V^{(1)}_{i,n} := V_{i,n}, S^{(1)}_{i,n} := S_{i,n}, and W^{(1,1)}_{ij} := W_{ij}. Henceforth, for the sake of clarity and when needed, we will represent vectors and matrices in bold.
To obtain a discrete model for DNFs, we are going to use nearly exact discretization schemes (NEDS) (see Kwessi et al., 2018). We note that according to this reference, Equation (1) is of type T_1; therefore, the left-hand side will be discretized as

$$\frac{V^{(k)}_{i,n+1} - V^{(k)}_{i,n}}{\varphi(h_n)}.$$

For the right-hand side, we write a Riemann sum for the integral in Equation (1) over the domain Ω as

$$\int_{\Omega} W_{k\ell}(x_i, y)\, G[V_\ell(y, t_n)]\, dy \approx \beta_\ell \sum_{j=1}^{M_\ell} W^{(k,\ell)}_{ij}\, G\big(V^{(\ell)}_{j,n}\big).$$

This means that, for a given L ∈ N, integers 1 ≤ k ≤ L, M_k ≥ 1, and 1 ≤ i ≤ M_k, a discrete dynamic neural field (DDNFs) model can be written as

$$V^{(k)}_{i,n+1} = \alpha_n V^{(k)}_{i,n} + (1 - \alpha_n)\Big[\sum_{\ell \in \mathcal{L}_k} \beta_\ell \sum_{j=1}^{M_\ell} W^{(k,\ell)}_{ij}\, G\big(V^{(\ell)}_{j,n}\big) + S^{(k)}_{i,n}\Big], \qquad (5)$$

where α_n = 1 − φ(h_n) = e^{−h_n}. Staying within the confines of the layers' architecture proposed by Amari (1977), in a DDNFs with L ≥ 3, we observe that the activity of neurons located on the bottom (or top) layer depends on the interconnected neurons on that layer and their connections to the layer above (or below). First, we define L_k as the set of layers connected to the kth layer, that is, L_k = {ℓ : 1 ≤ ℓ ≤ L and |ℓ − k| ≤ 1}. Middle layers are connected to two layers, one above and one below. This means that at time t_n, a neuron at position x_i on the first layer receives external input S^{(1)}_{i,n}, intra-layer signals W^{(1,1)}_{ij} from M_1 neurons on the first layer, and inter-layer signals W^{(1,2)}_{ij} from M_2 neurons on the second layer. Likewise, at time t = t_n, a neuron at position x_i on the kth layer (1 < k < L) receives external input S^{(k)}_{i,n}, inter-layer signals W^{(k−1,k)}_{ij} from M_{k−1} neurons on the (k − 1)th layer, intra-layer signals W^{(k,k)}_{ij} from M_k neurons on the kth layer, and inter-layer signals W^{(k,k+1)}_{ij} from M_{k+1} neurons on the (k + 1)th layer. Further, Equation (5) has a matrix notation, which helps with the study of stability. Indeed, let M = max_{1≤k≤L} M_k and d = L × M, and let X = (X_1, X_2, ⋯, X_L), where for 1 ≤ k ≤ L, X_k = (x_{k1}, x_{k2}, ⋯, x_{kM}) represents the positions of the M_k neurons on the kth layer. We assume without loss of generality that x_{ki} = 0 if M_k < i ≤ M, so as to have a full L × M matrix. Now consider the matrix V_n(X) whose kth row collects the potentials V^{(k)}_{i,n} for 1 ≤ i ≤ M. In this case, Equation (5) can be written in matrix form as

$$V_{n+1}(X) = F(V_n(X)) = \big(f_1(V_n(X)), f_2(V_n(X)), \cdots, f_L(V_n(X))\big), \qquad (6)$$

with the element-wise notation, for 1 ≤ i ≤ M and 1 ≤ k ≤ L,

$$[f_k(V_n(X))]_i = \alpha_n V^{(k)}_{i,n} + (1 - \alpha_n)\Big[\sum_{\ell \in \mathcal{L}_k} \beta_\ell \sum_{j=1}^{M_\ell} W^{(k,\ell)}_{ij}\, G\big(V^{(\ell)}_{j,n}\big) + S^{(k)}_{i,n}\Big].$$

For more complex layer architectures, the interested reader can refer to De Domenico et al. (2013) and the references therein, where tensor products are used instead of matrices.
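For concreteness, below is a minimal Python sketch (assuming NumPy) of one update step of the single-layer version of Equation (5); the function name, defaults, and layout are ours, for illustration only.

```python
import numpy as np

# A minimal sketch of one step of the single-layer (L = 1) DDNF update (5):
# V_{n+1} = alpha V_n + (1 - alpha) [ beta * W @ G(V_n) + S_n ].
# Names and default values are illustrative, not from the paper.
def ddnf_step(V, W, S, G, h=0.8, omega_size=40.0):
    M = V.size
    alpha = np.exp(-h)            # alpha_n = 1 - phi(h_n) = e^{-h_n}
    beta = omega_size / M         # spatial step beta = |Omega| / M
    return alpha * V + (1.0 - alpha) * (beta * (W @ G(V)) + S)
```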

Pulse Emission Rate Functions
Amari (1977) proved the existence of nontrivial solutions for DNFs when G is the Heaviside function. Further studies have shown that this is in fact true for a large class of functions G. As we discuss below, functions G satisfying the Hammerstein condition G(v) ≤ µ_1 v + µ_2 are good candidates. However, most practitioners use either the Heaviside function or the sigmoid function. For some θ > 0 and v_0 ≥ 0, the sigmoid function is defined as G_1(v) = (1 + e^{−θ(v−v_0)})^{−1}, and the Heaviside function is defined as G_2(v, v_0) = I_{[v_0,∞)}(v), see Figure 2 below. Here I_A is the indicator function of the set A, given as I_A(v) = 1 if v ∈ A and I_A(v) = 0 otherwise. This shows that the assumption that G be nonnegative and increasing is not unrealistic. In fact, we can transform many functions to bear properties similar to those of the Heaviside and sigmoid functions, see for instance Kwessi and Edwards (2020).
FIGURE 2 | An illustration of the pulse emission rate functions G_1(v) and G_2(v) for v_0 = 0.

REMARK 1. We observe that the sigmoid activation function G_1(v) = (1 + e^{−θ(v−v_0)})^{−1} is widely preferred in the literature for its bounded nature. Another reason is that it is also suitable when the states V^{(i)}_n are binary, that is, when they may take the values 0 or 1, where 0 and 1 represent, respectively, inactive and active neurons at time n. In this case, G_1 would represent the probability that there is activity at the neuron at position x_i at time n + 1.
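For reference, a small Python sketch of these two pulse emission rate functions (our implementation; the steepness θ = 4 is an arbitrary illustrative value):

```python
import numpy as np

def G1(v, theta=4.0, v0=0.0):
    """Sigmoid pulse emission rate: G_1(v) = (1 + e^{-theta (v - v0)})^{-1}."""
    return 1.0 / (1.0 + np.exp(-theta * (v - v0)))

def G2(v, v0=0.0):
    """Heaviside pulse emission rate: G_2(v, v0) = I_{[v0, infinity)}(v)."""
    return (np.asarray(v) >= v0).astype(float)
```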

Connection Intensity Functions
There is a variety of connection intensity functions (or kernels) that one can choose from. These include the Gaussian kernel

$$\mathrm{Gauss}(x, y, \beta) = e^{-\beta \|x - y\|^2}, \quad \beta > 0,$$

and the hyperbolic tangent kernel

$$\mathrm{Thyp}(x, y, \beta, r) = 1 + \tanh(\beta\, x \cdot y + r).$$

Note that as defined, the hyperbolic tangent kernel is not symmetric; however, for a suitable choice of β and r, one may obtain a nearly symmetric kernel. Indeed, if r < 0 and β is close to zero, we have Thyp(x, y, β, r) ≈ (1 + tanh(r)) e^{−β‖x−y‖²}, see for instance Lin and Lin (2003). This kernel is very useful in non-convex problems and support vector machines. In practice, however, it is very common to select the function W(x, y) as a difference of either of the functions above, which results in a "Mexican hat" shaped function, that is, either

$$W(x, y) = \sigma_+ e^{-\|x - y\|^2/(2\sigma_1^2)} - \sigma_- e^{-\|x - y\|^2/(2\sigma_2^2)}$$

or the analogous difference of hyperbolic tangent kernels. Figure 3 below is an illustration of these kernels for selected parameters. This choice ensures that neurons within the same cell assembly are connected reciprocally by high synaptic weights σ_+, whereas neurons not belonging to the same assembly are connected by low synaptic weights σ_− (see for instance Durstewitz et al., 2000; Quinton and Goffart, 2017; Wijeakumar et al., 2017; Jin et al., 2021).
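A short Python sketch of the difference-of-Gaussians kernel, using the parameter values adopted in the simulation section below:

```python
import numpy as np

def mexican_hat(x, y, sp=4.0, sm=1.5, s1=1.0, s2=4.5):
    """W(x, y) = sigma_+ e^{-(x-y)^2/(2 s1^2)} - sigma_- e^{-(x-y)^2/(2 s2^2)}."""
    d2 = (x - y) ** 2
    return sp * np.exp(-0.5 * d2 / s1**2) - sm * np.exp(-0.5 * d2 / s2**2)

# Full M x M connectivity matrix on a uniform grid over Omega = [-20, 20].
xs = np.linspace(-20.0, 20.0, 200)
W = mexican_hat(xs[:, None], xs[None, :])
```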

Stability Analysis
We recall the essential notions for the stability analysis of a discrete dynamical system V_{n+1} = F(V_n).

(i) A point V* is a fixed point of F if F(V*) = V*.
(ii) The Jacobian matrix of F evaluated at a fixed point V* is denoted JF(V*).
(iii) The spectral radius ρ(A) of a matrix A is the maximum of the absolute values of its eigenvalues.
(iv) A fixed point V* of F is said to be stable if every neighborhood of V* contains a neighborhood whose trajectories remain arbitrarily close to V*.
(v) A fixed point V* of F is said to be attracting if there exists a neighborhood of V* whose trajectories converge to V*, that is, lim_{n→∞} V_n = V*.
(vi) A fixed point V* of F is said to be asymptotically stable if it is both stable and attracting.
(vii) A fixed point V* of F is said to be unstable if it is not stable.

THEOREM 3. Let V_{n+1} = F(V_n) be a discrete dynamical system with a fixed point V* and let A = JF(V*) be its Jacobian matrix. Then V* is asymptotically stable if ρ(A) < 1 and unstable if ρ(A) > 1; when ρ(A) = 1, no conclusion can be drawn from A alone.
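Theorem 3 suggests a simple numerical recipe, sketched below: approximate the Jacobian of the update map at a candidate fixed point and inspect its spectral radius. The finite-difference Jacobian is our illustrative shortcut, not part of the paper.

```python
import numpy as np

def spectral_radius(A):
    return np.max(np.abs(np.linalg.eigvals(A)))

def numerical_jacobian(F, V_star, eps=1e-6):
    """Central finite-difference approximation of JF(V*)."""
    d = V_star.size
    J = np.empty((d, d))
    for p in range(d):
        e = np.zeros(d)
        e[p] = eps
        J[:, p] = (F(V_star + e) - F(V_star - e)) / (2.0 * eps)
    return J

# rho(J) < 1: asymptotically stable; rho(J) > 1: unstable; rho(J) = 1: inconclusive.
```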

Existence of Nontrivial Fixed Solutions for the DDNFs Model
A nontrivial solution for the DNFs is a nonconstant solution for which the left-hand side of Equation (1) is equal to zero. For the DDNFs, this can be formulated in the following way: for a given x_i on the kth layer, the DDNFs in (5) has a nontrivial fixed point V^{(k)}_{i,*}. This is satisfied when

$$V^{(k)}_{i,*} = \sum_{\ell \in \mathcal{L}_k} \beta_\ell \sum_{j=1}^{M_\ell} W^{(k,\ell)}_{ij}\, G\big(V^{(\ell)}_{j,*}\big) + S^{(k)}_{i,*}. \qquad (7)$$

The right-hand side of Equation (7) is a Riemann sum for

$$\sum_{\ell=1}^{L} \int_{\Omega} W_{k\ell}(x_i, y)\, G[V_\ell(y)]\, dy + S_k(x_i).$$

So the existence of a nontrivial fixed point for the continuous DNFs Equation (1) implies the existence of a nontrivial fixed solution for the DDNFs. According to Hammerstein (1930), such a nontrivial solution exists if W(x, y) is symmetric positive definite and G satisfies G(v) ≤ µ_1 v + µ_2, where 0 < µ_1 < 1 and µ_2 > 0 are constants. We will use a different approach to prove the existence of nontrivial solutions of DNFs. Let (S, A, µ) be a measure space. We will consider the space L¹(S) of equivalence classes of integrable functions on S, endowed with the norm ‖f‖₁ = ∫_S |f| dµ.

DEFINITION 5.
A density function is an element f of L¹(S) with f : S → [0, ∞) such that ∫_S f dµ = 1.

DEFINITION 6.
A Markov operator P on L¹(S) is a positive linear operator that preserves the norm, that is, Pf ≥ 0 and ‖Pf‖₁ = ‖f‖₁ for all f ∈ L¹(S) with f ≥ 0.

REMARK 7.
We note that f ≥ 0 (and Pf ≥ 0) represents equivalence classes of integrable functions on S that are greater than or equal to 0 except on a set of measure zero; thus the inequality should be understood almost everywhere. We also note that when an operator P satisfies the second condition above, it is sometimes referred to as a non-expansive operator.
THEOREM 8 (Corollary 5.2.1, Lasota and Mackey, 2004). Let S ≡ (S, A, µ) be a measure space and let P : L¹(S) → L¹(S) be a Markov operator. If for some density function f there is g ∈ L¹(S) such that P^n f ≤ g for all n, then there is a density f* such that Pf* = f*.

Now we use Definitions 4, 5, and 6 and Theorem 8 above to state our result on the existence of nontrivial fixed solutions of DNFs. Consider the linear operator P defined on S as

$$Pf(x) = \int_{S} W(x, y) f(y)\, dy.$$

Clearly, P is a Markov operator, and we have, for all n ≥ 1,

$$P^n f(x) = \int_{S} W_n(x, y) f(y)\, dy,$$

with W_1(x, y) = W(x, y) and W_{n+1}(x, y) = ∫_S W(x, z) W_n(z, y) dz for n ≥ 1. Moreover, by the positivity and boundedness of W, there exists M > 0 such that for a density function f, we have P^n f(x) ≤ M for all n ≥ 1. By the above theorem, there exists a density f* such that Pf* = f*. We conclude by observing that f*(x) = G(V*(x)) = V*(x) for some function V*(x) defined on Ω, and thus a nontrivial solution for the DNFs and DDNFs exists. That is, V* is a fixed point of G.
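This argument can be illustrated numerically on a discretized domain: normalize a positive kernel so that the induced operator is Markov, then iterate. The following sketch, with an arbitrary Gaussian kernel, checks that mass is preserved and that the iterates approach an invariant density.

```python
import numpy as np

xs = np.linspace(-20.0, 20.0, 200)
beta = xs[1] - xs[0]
K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2)   # positive, bounded kernel
K /= K.sum(axis=0, keepdims=True) * beta              # normalize: int K(x, y) dx = 1

f = np.ones_like(xs)
f /= f.sum() * beta                                    # uniform density on [-20, 20]
for _ in range(2000):
    f = beta * (K @ f)                                 # f <- P f  (Riemann sum)

print(np.isclose(f.sum() * beta, 1.0))                 # mass preserved (Markov property)
print(np.allclose(f, beta * (K @ f), atol=1e-6))       # f is numerically invariant
```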
One major limitation of the above theorem is that it only applies to positive connectivity functions W(x, y), which excludes a wide range of DNFs; for example, competing DNFs use almost exclusively negative weight functions, as do DNFs for working memory, see for instance Zibner et al. (2011). One remedy is to prove the existence of weak fixed solutions of (5) using iterative processes without a positivity restriction. Indeed, let C be a nonempty, closed, and convex subset of ℓ², the space of infinite sequences v = (v_1, v_2, ⋯) such that ‖v‖₂² = Σ_{n=1}^∞ |v_n|² < ∞. We recall that ℓ² is a Hilbert space with the inner product ⟨v, w⟩ = Σ_{n=1}^∞ v_n w_n.

DEFINITION 10. Consider a map T : C → ℓ². T is called a contraction if there exists a constant 0 < c < 1 such that ‖T(u) − T(v)‖₂ ≤ c‖u − v‖₂ for all u, v ∈ C, and T is called non-expansive if ‖T(u) − T(v)‖₂ ≤ ‖u − v‖₂ for all u, v ∈ C.
DEFINITION 11. Let T : C → ℓ² be either a contraction or a non-expansive map. A Mann-type iterative process is a difference equation on C defined as

$$V_{n+1} = \alpha_n V_n + (1 - \alpha_n)\, T(V_n), \quad 0 \le \alpha_n \le 1.$$

THEOREM 12. Suppose C is a subset of ℓ² and W(x, y) is bounded by W_0. If G(v) is Lipschitz with Lipschitz constant K such that W_0 K ≤ 1, then the DDNFs (5) is a Mann-type iterative process with at least one weak solution.
PROOF. Here, we denote V_n to mean V_n(X), as introduced earlier.
We start by rewriting certain quantities in vector and matrix form. Let d = L × M where, as above, M = max_{1≤k≤L} M_k. Let S_n = (S^{(1)}_{1,n}, ⋯, S^{(1)}_{M,n}, ⋯, S^{(L)}_{1,n}, ⋯, S^{(L)}_{M,n}) ∈ R^d and G(V_n) = (G(V^{(1)}_{1,n}), ⋯, G(V^{(L)}_{M,n})) ∈ R^d, where, as above, we complete the vectors with zeros for M_ℓ < j ≤ M. Let Ŵ be the d × d matrix whose entries are the weighted connection intensities β_ℓ W^{(k,ℓ)}_{ij} for ℓ ∈ L_k, and zero otherwise. Then the dot product Ŵ · G(V_n) collects, row by row, the weighted sums Σ_{ℓ∈L_k} β_ℓ Σ_{j=1}^{M_ℓ} W^{(k,ℓ)}_{ij} G(V^{(ℓ)}_{j,n}), so that the right-hand side of (5) can be written compactly in terms of Ŵ · G(V_n) and S_n.

We observe that (5) is a Mann-type process written as

$$T(V_n) = \hat{W} \cdot G(V_n) + S_n, \qquad V_{n+1} = \alpha_n V_n + (1 - \alpha_n)\, T(V_n), \qquad (9)$$

where T is the map on C defined by the first equation in (9). It follows that for U_n, V_n ∈ C,

$$\|T(U_n) - T(V_n)\|_2 = \|\hat{W} \cdot (G(U_n) - G(V_n))\|_2 \le W_0 K\, \|U_n - V_n\|_2.$$

Since by hypothesis K W_0 ≤ 1, T is either a contraction or a non-expansive map. It is known (see Mann, 1953) that in this case, the Mann-type iterative process (9) has at least one weak solution V*, that is, there exists V* ∈ C such that lim_{n→∞} ⟨V_n, V⟩ = ⟨V*, V⟩ for all V ∈ ℓ².
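A minimal sketch of the Mann-type process (9) for a generic map T; the names are ours, and for the DDNFs one would take T(V) = Ŵ·G(V) + S_n as above.

```python
import numpy as np

def mann_iterate(T, V0, alpha=0.45, n_steps=500):
    """Mann-type process: V_{n+1} = alpha V_n + (1 - alpha) T(V_n)."""
    V = np.asarray(V0, dtype=float)
    for _ in range(n_steps):
        V = alpha * V + (1.0 - alpha) * T(V)
    return V

# Example usage (hypothetical W_hat, G, S defined elsewhere):
# V_star = mann_iterate(lambda V: W_hat @ G(V) + S, V0=np.zeros(d))
```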
The second equation in (9) is a generalization of Equation (7) in Quinton and Goffart (2017). According to their description, it can be considered a predictive neural field (PNF), where T(V_n) is an internal projection corresponding to the activity required to cancel the lag of the neural field V_n for a specific hypothetically observed trajectory, by exciting the field in the direction of the motion, ahead of the current peak location, and inhibiting the field behind the peak.

Stability Analysis of Hyperbolic Fixed Solutions
To discuss stability analysis for the DDNFs in (5), we separate the analysis into a single-layer system and a multiple-layer system. For simplification purposes, we adopt the following notations when necessary: if L = 1 and for any M_1 > 0, we will write K_{ij} := K^{(1,1)}_{ij} and K_{i,·} := K^{(1,1)}_{i,·}; if L > 1 and M_k = M_ℓ = 1 for all 1 ≤ k, ℓ ≤ L, we will write K^{(k,ℓ)} := K^{(k,ℓ)}_{11}.

Single-Layer DDNFs Model
Let x_i ∈ Ω. Suppose that there are M neurons interconnected and connected to the neuron located at x_i. Let W_{ij} be the intensity of the connection between the neuron at x_i and the neuron at x_j. Let K_{ij} := W_{ij} G′(V_{j,*}) and let

$$K_{i,\cdot} := \frac{1}{M} \sum_{j=1}^{M} |K_{ij}|$$

be the weighted average rate of change of pulse emissions received or emitted by the neuron at x_i, where V_{i,*} := V^{(1)}_{i,*} is a nontrivial solution of Equation (5) with L = 1. Some authors, like Lin et al. (2009), specify the type of connection by dividing the connectivities or synaptic strengths W_{ij} into a group of receptive signals called "feed backward" and a group of emission signals called "feed forward." Here we make no such specification. Recall that α_n = 1 − φ(h_n) = e^{−h_n}. Henceforth, we fix h_n = h > 0 for all n, so that 0 < α := α_n = e^{−h} < 1.

THEOREM 13. Let Ω ⊆ R be a bounded region. Let M be a positive integer. Let x = (x_i)_{1≤i≤M} be a vector of M points on Ω, interconnected via a connectivity function W(·, ·). Let V*(x) = (V_{i,*})_{1≤i≤M} be a nontrivial fixed point of the DDNFs on Ω, and let µ_i = |Ω| K_{i,·}. If max_{1≤i≤M} µ_i < 1, then V*(x) is asymptotically stable.
PROOF. We have β_ℓ = β = |Ω|/M. Let x_i ∈ Ω and let V_{i,*} be a nontrivial fixed point of the DDNFs with one layer. Let 1 ≤ j ≤ M and x_j ∈ Ω. The DDNFs with one layer in Equation (5) can be written as V_{n+1}(x) = f(V_n(x)) = (V_{1,n+1}, V_{2,n+1}, ⋯, V_{M,n+1}), and element-wise as

$$V_{i,n+1} = \alpha V_{i,n} + (1 - \alpha)\Big[\beta \sum_{j=1}^{M} W_{ij}\, G(V_{j,n}) + S_{i,n}\Big].$$

The function f : R^M → R^M is continuously differentiable since the function G : R → R is, and given 1 ≤ i, p ≤ M, we have

$$\frac{\partial [f(V)]_i}{\partial V_p} = \alpha\, \delta_{ip} + (1 - \alpha)\, \beta\, W_{ip}\, G'(V_p),$$

where δ_{ip} is the Kronecker symbol with δ_{ip} = 1 if i = p and δ_{ip} = 0 if i ≠ p. It follows that the Jacobian matrix at V*(x), which we define as the adjacency matrix, is

$$A = \alpha I + (1 - \alpha)\, \beta\, K, \quad \text{where } K = (K_{ip})_{1 \le i, p \le M},\ K_{ip} = W_{ip}\, G'(V_{p,*}).$$

A is an M × M matrix; let ρ(A) be its spectral radius. Therefore, we have

$$\rho(A) \le \|A\|_{\infty} = \max_{1 \le i \le M}\Big[\alpha + (1 - \alpha)\, \beta \sum_{p=1}^{M} |K_{ip}|\Big] = \max_{1 \le i \le M}\big[\alpha + (1 - \alpha)\, \mu_i\big].$$

Since 0 < α < 1, the inequality α + (1 − α)µ_i < 1 is equivalent to µ_i < 1. It follows that if max_{1≤i≤M} µ_i < 1, then ρ(A) ≤ ‖A‖_∞ < max{1, max_{1≤i≤M} |µ_i|} = 1, and consequently V*(x) is asymptotically stable.

REMARK 14.
The condition max_{1≤i≤M} µ_i < 1 links the size of the domain Ω to other parameters such as the rate of change of the pulse emission function G, the connectivity function W(x, y), and the number of connected neurons M. Indeed, µ_i = |Ω| K_{i,·}, so achieving max_{1≤i≤M} µ_i < 1 amounts to requiring K_{i,·} < 1/|Ω| for each i. This means that to ensure stability of the fixed solution V*(x), it is enough to have the weighted rates of change of pulse emission K_{ij} = W_{ij} G′(V*(x_j)) be bounded by 1/|Ω| for each neuron. This condition is trivially satisfied if the neuron at x_i is not connected to the neuron at x_j, since W_{ij} would then be zero. In fact, the latter corresponds to the trivial fixed solution V*(x) = 0.
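The criterion of Theorem 13 and Remark 14 can be checked numerically. The sketch below reuses xs, W, and G1 from the earlier snippets and tests both ρ(A) < 1 and the sufficient condition max_i µ_i < 1; the candidate fixed point V* = 0 is illustrative only.

```python
import numpy as np

h, theta = 0.8, 4.0
alpha = np.exp(-h)
M = xs.size
beta = 40.0 / M                                   # |Omega| / M, with Omega = [-20, 20]

V_star = np.zeros(M)                              # illustrative candidate fixed point
g = G1(V_star, theta)
G1_prime = theta * g * (1.0 - g)                  # sigmoid derivative G_1'(v)

K = W * G1_prime[None, :]                         # K_ij = W_ij G'(V*_j)
A = alpha * np.eye(M) + (1.0 - alpha) * beta * K  # adjacency (Jacobian) matrix
mu = 40.0 * np.mean(np.abs(K), axis=1)            # mu_i = |Omega| K_{i,.}

print(np.max(np.abs(np.linalg.eigvals(A))) < 1)   # rho(A) < 1 ?
print(mu.max() < 1)                               # sufficient condition of Theorem 13
```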

Special Case of Single Layer Model With a Heaviside Pulse Emission Function
Suppose the pulse emission function G is the Heaviside function G_2(v, v_0). There are two possible approaches to discuss the stability of the nontrivial solution. First, we observe that the derivative G′_2(v, v_0) of G_2(v, v_0) exists in the sense of distributions and is given as the Dirac distribution centered at v_0, so that G′_2(V_{j,*}, v_0) = 0 whenever V_{j,*} ≠ v_0. This means that for a nontrivial solution, the adjacency matrix A defined above reduces to the diagonal matrix A = diag(α). It follows that ρ(A) = α < 1 and the nontrivial solution V*(x) is asymptotically stable. Another approach is to note that Equation (5) with one layer (L = 1) can be written as

$$V_{i,n+1} = \alpha V_{i,n} + (1 - \alpha) S_{i,n} + \lambda_{i,n}, \quad \text{where } \lambda_{i,n} = (1 - \alpha)\, \beta \sum_{j=1}^{M} W_{ij}\, G_2(V_{j,n}, v_0).$$

Moreover, iterating this recursion, we have

$$V_{i,n+1} = \alpha^{n+1} V_{i,0} + \sum_{k=0}^{n} \alpha^{k}\big[(1 - \alpha)\, S_{i,n-k} + \lambda_{i,n-k}\big].$$

Suppose that for all n ∈ N and for all x ∈ Ω, S_n(x) and W(x, y) are bounded by, say, S and W, respectively. Let Y_m = (1 − α)[S + |Ω| W]. For all 1 ≤ i ≤ M, for x_i ∈ Ω, and n ∈ N, we have (1 − α) S_{i,n} + λ_{i,n} ≤ Y_m, and therefore

$$V_{i,n+1} \le \alpha^{n+1} V_{i,0} + Y_m \sum_{k=0}^{n} \alpha^{k} = \alpha^{n+1} V_{i,0} + Y_m\, \frac{1 - \alpha^{n+1}}{1 - \alpha}.$$

This shows that lim_{n→∞} V_{i,n} exists, and V(x) = lim_{n→∞} V_n(x) defines the state of the field over time. If lim_{n→∞} V_n(x) < 0, then the field settles into an inhibitory phase over time, and if lim_{n→∞} V_n(x) > 0, the field settles into an excitatory phase over time. In general, S_{i,n} = ν + U_{i,n}, where ν is a negative real number referred to as the resting level, which ensures that the DNFs produce no output in the absence of external input U_{i,n}. If U_{i,n} ≡ 0 and V_0(x) < 0 for all x ∈ Ω, then V_{i,n+1} = α^{n+1} V_{i,0} + (1 − α^{n+1}) ν, since λ_{i,n−k} = 0 for all k = 0, ⋯, n and for all 1 ≤ i ≤ M. Hence we will have lim_{n→∞} V_{i,n} = ν, see section 3.5.1 below for an illustration.

Multiple-Layers DDNFs Model
Let 1 ≤ k, m ≤ L. Consider the DDNFs with L layers given above and let x_i, x_p be points on the kth layer.
Consider a fixed point solution V^{(k)}_{i,*}. Given a layer 1 ≤ k ≤ L and a neuron at position x_i on it, there are two sources of variability for the potential V^{(k)}_{i,n+1} = [f_k(V_n(X))]_i: firstly, the variability due to the M_k neurons on the kth layer connected to the neuron at position x_i, and secondly, the variability due to the neurons located on the (up to) two layers (k − 1) and (k + 1) connected to the kth layer. Therefore, let 1 ≤ m ≤ L and let x_q be a point on the mth layer. By the chain rule, the entries of the Jacobian blocks A^{(k,m)} = (∂[f_k(V_n(X))]_i / ∂V^{(m)}_{q,n}), evaluated at the fixed point, are as follows. The variability on the kth layer is

$$A^{(k,k)}_{iq} = \alpha\, \delta_{iq} + (1 - \alpha)\, \beta_k\, W^{(k,k)}_{iq}\, G'\big(V^{(k)}_{q,*}\big), \qquad (11)$$

and from Equation (7), we obtain the variability due to layers k − 1 and k + 1:

$$A^{(k,k-1)}_{iq} = (1 - \alpha)\, \beta_{k-1}\, W^{(k-1,k)}_{iq}\, G'\big(V^{(k-1)}_{q,*}\big) \quad \text{and} \quad A^{(k,k+1)}_{iq} = (1 - \alpha)\, \beta_{k+1}\, W^{(k,k+1)}_{iq}\, G'\big(V^{(k+1)}_{q,*}\big). \qquad (12)$$

Consequently, A^{(k,m)} = 0 if m ∉ L_k, where 0 represents the zero matrix. Therefore, the super-adjacency matrix can be written as

$$\mathbf{A} = \begin{pmatrix} A^{(1,1)} & A^{(1,2)} & 0 & \cdots & 0 \\ A^{(2,1)} & A^{(2,2)} & A^{(2,3)} & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & A^{(L-1,L-2)} & A^{(L-1,L-1)} & A^{(L-1,L)} \\ 0 & \cdots & 0 & A^{(L,L-1)} & A^{(L,L)} \end{pmatrix}.$$

Clearly, A is a block tridiagonal matrix that reduces to a block diagonal matrix if the pulse emission rate function G is the Heaviside function, and to a single M × M matrix if L = 1.
There is a rich literature on the subject of tridiagonal matrices, and research is still ongoing. There is no closed-form formula for the eigenvalues of A in general, as they are not straightforward to calculate except in some special cases.

Special Case: Single Neuron Multiple-Layers DDNFs
This is a special case worth mentioning, since all L layers have exactly one neuron each and they are interconnected. In this case, M_k = 1 and β_k = |Ω| for 1 ≤ k ≤ L, and the super-adjacency matrix reduces to an L × L tridiagonal matrix

$$\mathbf{A} = \begin{pmatrix} a_1 & b_1 & & \\ c_1 & a_2 & \ddots & \\ & \ddots & \ddots & b_{L-1} \\ & & c_{L-1} & a_L \end{pmatrix},$$

where, from Equations (11) and (12), one has a_k = α + (1 − α)|Ω|K^{(k,k)} for 1 ≤ k ≤ L, b_k = (1 − α)|Ω|K^{(k,k+1)}, and c_{k−1} = (1 − α)|Ω|K^{(k−1,k)} for 2 ≤ k ≤ L.
We observe that K^{(k,k)} is the weighted rate of change of the pulse emission function on the kth layer, and K^{(k,k+1)} can be regarded as the transitional weighted rate of change of the pulse emission function from layer k to layer k + 1. Likewise, K^{(k−1,k)} is the transitional weighted rate of change of the pulse emission function from layer k − 1 to layer k. From Molinari (2008), the eigenvalues of A are given by a transfer matrix equation (13). Suppose further that a_1 = a_2 = ⋯ = a_L = a, b_1 = b_2 = ⋯ = b_{L−1} = b, and c_1 = c_2 = ⋯ = c_{L−1} = c. This means that the weighted rates of change of the pulse emission function satisfy K^{(k,k)} = K_0 for 1 ≤ k ≤ L, K^{(k−1,k)} = K_1 for 1 ≤ k ≤ L − 1, and K^{(k,k+1)} = K_2 for 2 ≤ k ≤ L. From Kulkarni et al. (1999), the eigenvalues of A are then given by

$$\mu_k = a + 2\sqrt{bc}\, \cos\Big(\frac{k\pi}{L+1}\Big), \quad 1 \le k \le L,$$

that is, µ_k = α + (1 − α)|Ω|K_0 + 2(1 − α)|Ω|√(K_1 K_2) cos(kπ/(L + 1)). We therefore have the following result.

THEOREM 15. Consider a single-neuron L-layer DDNFs as above, with µ_k, 1 ≤ k ≤ L, as given above.
1. The fixed point V*(X) is asymptotically stable if max_{1≤k≤L} |µ_k| < 1.
2. The fixed point V*(X) is unstable if max_{1≤k≤L} |µ_k| > 1.
3. The fixed point V*(X) may be stable or unstable if max_{1≤k≤L} |µ_k| = 1.

PROOF. By hypothesis, a_1 = a_2 = ⋯ = a_L = a, b_1 = ⋯ = b_{L−1} = b, and c_1 = ⋯ = c_{L−1} = c, so the eigenvalues of A are the µ_k given above. From Theorem 3, the proof is complete.
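The closed-form eigenvalues from Kulkarni et al. (1999) are easy to verify numerically; a quick sketch for an arbitrary constant tridiagonal matrix:

```python
import numpy as np

L, a, b, c = 6, 0.8, 0.05, 0.03
A = a * np.eye(L) + b * np.eye(L, k=1) + c * np.eye(L, k=-1)

ks = np.arange(1, L + 1)
closed_form = a + 2.0 * np.sqrt(b * c) * np.cos(ks * np.pi / (L + 1))

print(np.allclose(np.sort(np.linalg.eigvals(A).real),
                  np.sort(closed_form)))          # True
```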

REMARK 16.
1. It is worth observing that in the case L = 1 and M = 1, we have K_1 = K_2 = 0 and thus µ_1 = α + (1 − α)|Ω|K_0. We see that |µ_1| < 1 if |Ω|K_0 < 1, yielding the condition obtained in the one-layer model.
2. We also note that max_{1≤k≤L} |µ_k| ≤ |α + (1 − α)|Ω|K_0| + 2(1 − α)|Ω|√(K_1 K_2), and therefore the stability links all the parameters of the system.
3. This special case is also important in that we can imagine a large number of neurons placed on individual layers, interconnected in a forward or backward manner only, and each receiving external input. In the case where the first and last neurons are connected, so that the system forms a weighted closed graph as in Figure 4 (where the thickness of the edges represents the weights), the super-adjacency matrix acquires additional corner entries coupling the first and Lth layers; according to Molinari (2008), its eigenvalues can again be obtained via a transfer matrix equation.

Special Case: Single Neuron Two-Layer DDNFs

We now discuss a single-neuron two-layer DDNFs. In this case, we still have M_k = 1 for k = 1, 2 and L = 2. It follows that the super-adjacency matrix is given as

$$\mathbf{A} = \begin{pmatrix} a_1 & b_1 \\ c_1 & a_2 \end{pmatrix},$$

where, from Equations (11) and (12), we have a_k = α + (1 − α)|Ω|K^{(k,k)} for k = 1, 2, b_1 = (1 − α)|Ω|K^{(1,2)}, and c_1 = (1 − α)|Ω|K^{(2,1)}.

FIGURE 6 | Here U_{i,n} = 0 and G(v) = G_2(v). From (A), we observe the system alternating between inhibitory and excitatory phases. (B,D) show that for fixed x = 9.548, the system converges over time. In (C), we clearly see the oscillatory regime of the system in the space domain for fixed n = 98.

We have the following stability result.
THEOREM 17. Consider a two-layer DDNFs with one neuron on each layer. Then the nontrivial solution V*(X) is asymptotically stable if the following conditions are satisfied: (i) a_1 a_2 − b_1 c_1 < 1, and (ii) |a_1 + a_2| < 1 + a_1 a_2 − b_1 c_1.

PROOF. By the determinant-trace analysis, see for instance Elaydi (2007), it is enough to prove that the above conditions are equivalent to |tr(A)| − 1 < det(A) < 1, which holds since tr(A) = a_1 + a_2 and det(A) = a_1 a_2 − b_1 c_1.
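A small sketch of the determinant-trace test used in the proof; the helper and the sample values are ours.

```python
import numpy as np

def two_layer_stable(a1, a2, b1, c1):
    """Jury test for A = [[a1, b1], [c1, a2]]: |tr(A)| - 1 < det(A) < 1."""
    A = np.array([[a1, b1], [c1, a2]])
    tr, det = np.trace(A), np.linalg.det(A)
    return abs(tr) - 1.0 < det < 1.0

print(two_layer_stable(0.7, 0.6, 0.1, 0.1))   # True: both eigenvalues inside the unit circle
```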
FIGURE 7 | Here U_{i,n} = (2π)^{−1/2} e^{−x_i²/2} and G(v) = G_1(v). From (A), we observe that the system has an excitatory phase around 0 and an inhibitory phase farther away. (B,D) show that for fixed x = 2.111, the system converges over time. In (C), we clearly observe a high level of excitation occurring around 0, and the system settling back to its resting phase farther away from 0, for fixed n = 38.

Simulations
There are multiple parameters that affect the stability of a fixed point solution of the DDNFs: the choice of the kernel function W(x, y), the pulse emission rate function G, parameters such as the size of the field |Ω|, the time scale function φ(h), the number of points per layer M, and the length of time N. In these simulations, we will consider Ω = [−20, 20], h = 0.8, N = 100, M = 200, and the kernel W(x, y) = σ_+ e^{−0.5(x−y)²/σ_1²} − σ_− e^{−0.5(x−y)²/σ_2²}, with σ_+ = 4, σ_− = 1.5, σ_1 = 1, and σ_2 = 4.5. We select a resting level ν = −0.5. To initialize our dynamical system, we choose V_{i,0} = −1.5 for i = 1, 2, ⋯, M. In Figures 5-8 below, (A) is the three-dimensional representation of V_n(x) := V(x, n); (B) represents V_n(x) as a function of n for a given x value; (C) represents V_n(x) as a function of x for a given n value; and (D) represents the cobweb diagram for a given x value, namely a plot of V_{n+1}(x) vs. V_n(x), in which the green dot represents the starting point V_0(x) = −1.5 and the red arrows connect the points with coordinates [V_n(x), V_n(x)] and [V_n(x), V_{n+1}(x)] to show the evolution of the dynamical system over time. Convergence occurs when the arrows stop evolving, and oscillations occur when they circle around indefinitely. In the multi-layer case, we let L = 250.

Single-Layer With Heaviside Function and No External Input

In this simulation, we consider a single-layer DDNFs with U_{i,n} = 0 and a Heaviside pulse emission function, see Figure 5 below.
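For reproducibility, here is a compact sketch of this first simulation under the stated settings (Heaviside G, U_{i,n} = 0, resting level ν = −0.5); plotting is omitted.

```python
import numpy as np

M, N, h, nu = 200, 100, 0.8, -0.5
xs = np.linspace(-20.0, 20.0, M)
alpha, beta = np.exp(-h), 40.0 / M

d2 = (xs[:, None] - xs[None, :]) ** 2
W = 4.0 * np.exp(-0.5 * d2 / 1.0**2) - 1.5 * np.exp(-0.5 * d2 / 4.5**2)
G = lambda v: (v >= 0.0).astype(float)        # Heaviside with v0 = 0

V = np.full(M, -1.5)                          # V_{i,0} = -1.5
history = [V.copy()]
for n in range(N):
    S = nu                                    # S_{i,n} = nu + U_{i,n}, with U = 0
    V = alpha * V + (1.0 - alpha) * (beta * (W @ G(V)) + S)
    history.append(V.copy())

V_hist = np.array(history)                    # (N+1) x M array of V_n(x)
```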

Single-Layer With Sigmoid Function and No External Input
In this simulation, we consider a single-layer DDNFs with U_{i,n} = 0 but with a sigmoid pulse emission function, see Figure 6 below.

Single-Layer With Heaviside Function With Gaussian External Input
In this simulation, we consider a single-layer DDNFs with a Heaviside pulse emission function and a Gaussian external input U_{i,n} = (2π)^{−1/2} e^{−x_i²/2}, see Figure 7 below.

Single-Layer With Sigmoid Function With Gaussian External Input
In this simulation, we consider a single-layer DDNFs with a sigmoid pulse emission function and a Gaussian external input U_{i,n} = (2π)^{−1/2} e^{−x_i²/2}, see Figure 8 below.

Multiple-Layer With Sigmoid Function With No External Input
In this simulation, we consider a multiple-layer DDNFs with a sigmoid pulse emission rate function and no external input, U_{i,n} = 0, see Figure 9 below.

Multiple-Layer With Sigmoid Function With Gaussian External Input
In this simulation, we consider a multiple-layer DDNFs with a sigmoid pulse emission rate function and a Gaussian external input U_{i,n} = (2π)^{−1/2} e^{−x_i²/2}, see Figure 10 below.

Effect of α
In this simulation, we consider V_N(x, α), where N = 100, for different values of α. Since convergence to the fixed solution is quick, we expect V_N(x, α) to be independent of α at n = N = 100, in that any cross-section of the three-dimensional plot of V_N(x, α) should be the same. This is shown in Figure 11 below for G(v) = G_1(v) and U_{i,n} = (2π)^{−1/2} e^{−x_i²/2}.

DISCUSSION
In this paper, we have proposed a discrete model for dynamic neural fields. We have proposed another proof of the existence of nontrivial solutions for dynamic neural fields, and we have discussed the stability of nontrivial solutions of dynamic neural fields in some particular cases. The simulations we propose capture very well the notion of stability in dynamic neural fields. Below are some observations drawn from this analysis.
1. We observe that the cases of no external input U_{i,n} with one layer and multiple neurons, and multiple layers with one neuron each, are very similar, albeit with different shapes of the field. From (A) of the corresponding figure, we observe that the system is quickly excitatory and settles into a constant "Gaussian" state over time. (B,D) show that for fixed k = 114, the system converges over time. In (C), we clearly see the "Gaussian" state for n = 45, with peaks located at around L = 250.