# Discrete Dynamics of Dynamic Neural Fields

- Department of Mathematics, Trinity University, San Antonio, TX, United States

Cortical regions of the brain, large and small, are known to contain vast numbers of neurons that interact with one another. They thus form a continuum of active neural networks whose dynamics are yet to be fully understood. One way to model these activities is to use dynamic neural fields, which are mathematical models that approximately describe the behavior of these congregations of neurons. These models have been used in neuroinformatics, neuroscience, robotics, and network analysis to understand not only brain functions or brain diseases, but also learning and brain plasticity. In their theoretical forms, they are given as ordinary or partial differential equations with or without diffusion. Many of their mathematical properties are still under-studied. In this paper, we propose to analyze discrete versions of dynamic neural fields based on nearly exact discretization techniques. In particular, we discuss conditions for the stability of nontrivial solutions of these models for various types of kernels and corresponding parameters. Monte Carlo simulations are given for illustration.

## 1. Introduction

To model a large number of cells randomly placed together with a uniform volume density and capable of supporting various simple forms of activity, including plane waves, spherical and circular waves, and vortex effects, Beurle (1956) proposed a continuous model consisting of a partial differential equation that can capture some neuron behaviors, such as excitability, at the cortical level. Subsequent modifications by Wilson and Cowan (1972) and Amari (1977) allowed for new features such as inhibition. This essentially gave birth to the theory of dynamic neural fields (DNFs) with far-reaching applications. The literature on DNFs is rather rich, and the trends nowadays are toward cognition, plasticity, and robotics. In cognitive neuroscience for instance, DNFs have enabled analyses of electroencephalograms (Nunez and Srinivasan, 2006), short term memory (Camperi and Wang, 1998), visual hallucinations (Ermentrout and Cowan, 1979; Tass, 1995), and visual memory (Schmidt et al., 2002). Durstewitz et al. (2000) have proposed neurocomputational models of working memory using single layer DNFs and shown that the temporal dynamics they obtained compare well with *in vivo* observations. Simmering et al. (2008) and Perone and Simmering (2019) used DNFs to achieve spatial cognition by successfully simulating infants' performance in some tasks. Quinton and Goffart (2017) used nonlinearities in DNFs to propose a unifying model for goal-directed eye movements that allows for the implementation of qualitatively different mechanisms, including memory formation and sensorimotor coupling. Plasticity in neuroscience can be thought of as the ability of the brain to adapt to external activities by modifying some of its structure. Connections between plasticity and DNFs have been made in studies on intrinsic plasticity, see Strub et al. (2017), or in parallel works such as Neumann and Steil (2011), Pozo and Goda (2010), and the references therein.
Robotics has been a great niche for DNFs with the works of Bicho et al. (2000), Erlhagen and Schöner (2001), Erlhagen and Bicho (2006), and Bicho et al. (2010).

To recall the theoretical aspects of DNFs, let Ω ⊆ ℝ^{d} be a manifold, where *d* is a positive integer. In the presence of neurons located on Ω at time *t* arranged on *L* layers, the average potential function *V*_{k}(*x*_{k}, *t*) is often used to understand the continuous field on the *k*th layer. *V*_{k}(*x*_{k}, *t*) is the average membrane potential of the neuron located at position *x*_{k} at time *t* on the *k*th layer. When *L* = 1, *V*(*x*_{k}, *t*) can also be understood as the synaptic input or activation at time *t* of a neuron at position or direction *x*_{k}. It satisfies the following equation (see Amari, 1977):

where *W*_{kl}(*x*_{k}, *x*_{ℓ}) is the intensity of the connection between a neuron located at position *x*_{k} on the *k*th layer and a neuron at position *x*_{ℓ} on the ℓth layer, and *G*[*V*_{ℓ}(*x*_{ℓ}, *t*)] is the pulse emission rate (or activity) at time *t* of the neuron located at position *x*_{ℓ} on the ℓth layer. *G* is often chosen as a monotone increasing function. *S*_{k}(*x*_{k}, *t*) represents the intensity of the external stimulus at time *t* arriving on the neuron at position *x*_{k} on the *k*th layer, see Figure 1 below.

**Figure 1**. A representation of the DNFs with four layers and 34 neurons. The green dots are the neurons with intra-layer connections represented by the red lines, and inter-layer connections represented by the dashed lines.

Most mentions of discrete systems involving DNFs are of Euler forward type discretization—see Erlhagen and Bicho (2006), Quinton and Goffart (2017), and Beim and Hutt (2014)—with simulations as the main objective. Stability analysis of solutions of DNFs in the sense of discrete dynamical systems is either understudied or often ignored. Beim and Hutt (2014) studied a heterogeneous aspect of DNFs and found the existence of attractors and saddle nodes for solutions of (1). There are also discussions of DNFs as neural networks. For instance, Sussilo and Barack (2013) discussed continuous time recurrent neural networks (RNNs) as differential equations of the form

where *V*_{k} is an *M* dimensional state of activations and *G*(*V*_{j}) are the firing rates, and ${\eta}_{k}={\displaystyle \sum _{j=1}^{I}}{B}_{kj}{u}_{j}$ represents the external stimulus received from *I* inputs *u*_{j} with synaptic weights *B*_{kj}. The weights *L*_{kj} are selected from a normal distribution. In a sense, Equation (1) can be considered a continuous time RNN. Authors such as Elman (1990) and Williams and Zipser (1990), and most recently, Durstewitz (2017)—see Equation (9.8) therein—have discussed discrete time RNNs formulated as a difference equation of the form

where *G* is a monotone increasing function, *W*_{i0} is a basic activity level for the neuron at position *x*_{i}, the *W*_{ij}'s are weights representing the connection strength between the neuron at position *x*_{i} and the neuron at position *x*_{j}, and ${\eta}_{n}^{(i)}$ is some external stimulus at time *n* arriving on the neuron at position *x*_{i}. So essentially, after discretization, (1) may yield not only a discrete time dynamical system, but also an RNN. In fact, Jin et al. (2021) considered a discrete equivalent in space of Equation (1) for *L* = 2 while keeping time continuous, and discussed it in the context of unsupervised learning, which could help understand plasticity in the brain. The two main goals of this paper are:

1. to propose a discrete dynamic neural field model that is both numerically convergent and dynamically consistent. The model is based on nearly exact discretization techniques proposed in Kwessi et al. (2018) that ensure that the discrete system obtained preserves the dynamics of the continuous system it originated from.

2. to propose a rigorous and holistic theoretical framework to study and understand the stability of dynamic neural fields.

The model we propose has the advantage of being general and simple enough to implement. It also captures different generalizations of DNFs such as predictive neural fields (PNFs) and active neural fields (ANFs). We further show that the proposed model is in effect a Mann-type (see Mann, 1953) local iterative process which admits weak-type solutions under appropriate conditions. The model can also be written as a recurrent neural network. Stability analysis shows that in some case, the model proposed can be studied using graph theory.

The remainder of the paper is organized as follows: in section 2, we propose our nearly exact discretization scheme for DNFs. In section 3.1, we give a brief overview of the essential notions for the stability analysis of discrete dynamical systems; in section 3.2, we discuss the existence of nontrivial fixed solutions for DDNFs; in sections 3.3 and 3.4, we discuss stability analysis for single-layer and multiple-layer discrete DNFs. In section 3.5, we propose simulations and their interpretation for neural activity, and in section 4, we make our concluding remarks.

## 2. Materials and Methods

### 2.1. Discrete Schemes for DNFs

For the sake of simplicity, we will use the following notations: given a positive integer *L*, integers 1 ≤ *k*, ℓ ≤ *L*, and *M*_{k}, *M*_{ℓ}, we consider indices 1 ≤ *i*_{k} ≤ *M*_{k}, 1 ≤ *j*_{ℓ} ≤ *M*_{ℓ}. We put

where *h*_{n} = *t*_{n+1} − *t*_{n}, and $\varphi ({h}_{n})=1-{e}^{-{h}_{n}}$ is the time scale of the field. We obviously assume that all the layers of the domain Ω have the same time scale. |Ω| represents the size of the field Ω, which henceforth we will assume to be finite. *x*_{i} represents a point on the *k*th layer and *y*_{j} a point on the ℓth layer. *M*_{k} represents the number of interconnected neurons on the *k*th layer of the domain Ω, and ${\beta}_{k}=\frac{|\Omega |}{{M}_{k}}$ represents the spatial discretization step on the *k*th layer of the domain Ω. When *L* = 1, we will adopt the notations ${V}_{i,n}^{(1)}={V}_{i,n},{S}_{i,n}^{(1)}:={S}_{i,n}$ and ${W}_{ij}^{(1,1)}:={W}_{ij}$. Henceforth, for the sake of clarity and when needed, we will represent vectors or matrices in bold.

To obtain a discrete model for DNFs, we are going to use nearly exact discretization schemes (NEDS) (see Kwessi et al., 2018). We note that according to this reference, Equation (1) is of type *T*_{1}, therefore, the left-hand-side will be discretized as

For the right-hand-side, we write a Riemann sum for the integral in Equation (1), over the domain Ω as

This means that, for a given *L* ∈ ℕ, for integers 1 ≤ *k* ≤ *L, M*_{k}, and 1 ≤ *i*_{k} ≤ *M*_{k}, a discrete dynamic neural field (DDNFs) model can be written as

Staying within the confines of the layers' architecture proposed by Amari (1977), in a DDNFs with *L* ≥ 3, we observe that the activity on neurons located on the bottom (or top) layer depends on the interconnected neurons on that layer and their connections to the layer above (or below). First, we define

Middle layers are connected to two layers, one above and one below. This means that at time *t*_{n}, a neuron at position *x*_{i} on the first layer receives external input ${S}_{i,n}^{(1)}$, intra-layer signals ${W}_{ij}^{(1,1)}$ from *M*_{1} neurons on the first layer, and inter-layer signals ${W}_{ij}^{(1,2)}$ from *M*_{2} neurons on the second layer. Likewise, a neuron at position *x*_{i} on the *L*th layer receives external input ${S}_{i,n}^{(L)}$, inter-layer signals ${W}_{ij}^{(L-1,L)}$ from *M*_{L−1} neurons on the (*L* − 1)th layer, and intra-layer signals ${W}_{ij}^{(L,L)}$ from *M*_{L} neurons on the *L*th layer. For 1 < *k* < *L*, we have ${W}_{ij}^{(k,\ell )}=0$ if ℓ ∉ *L*_{k} and ${W}_{ij}^{(k,\ell )}\ne 0$ if ℓ ∈ *L*_{k}. This means that at time *t* = *t*_{n}, a neuron at position *x*_{i} on the *k*th layer receives external input ${S}_{i,n}^{(k)}$, inter-layer signals ${W}_{ij}^{(k-1,k)}$ from *M*_{k−1} neurons on the (*k* − 1)th layer, intra-layer signals ${W}_{ij}^{(k,k)}$ from *M*_{k} neurons on the *k*th layer, and inter-layer signals ${W}_{ij}^{(k,k+1)}$ from *M*_{k+1} neurons on the (*k* + 1)th layer.
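As a concrete sketch of the resulting update rule, the single-layer (*L* = 1) case combines the ingredients above: with α = *e*^{−h} and β = |Ω|/*M*, each potential evolves as *V*_{i,n+1} = α*V*_{i,n} + (1 − α)[β Σ_{j} *W*_{ij}*G*(*V*_{j,n}) + *S*_{i,n}]. The code below is a minimal illustration of this update; the function names and parameter values are ours, not from the original:

```python
import math

def ddnf_step(V, W, S, G, h, omega_size):
    """One nearly-exact-discretization update of a single-layer DDNF.
    V: list of M membrane potentials; W: M x M connection intensities;
    S: external stimulus per neuron; G: pulse emission rate function."""
    M = len(V)
    alpha = math.exp(-h)        # alpha = 1 - phi(h) = e^{-h}
    beta = omega_size / M       # spatial discretization step beta = |Omega|/M
    return [alpha * V[i]
            + (1 - alpha) * (beta * sum(W[i][j] * G(V[j]) for j in range(M))
                             + S[i])
            for i in range(M)]

G = lambda v: 1.0 / (1.0 + math.exp(-v))   # sigmoid emission rate
W = [[0.0, 1.0], [1.0, 0.0]]               # two mutually excitatory neurons
V = [0.1, -0.1]
for _ in range(200):                        # iterate toward a fixed point
    V = ddnf_step(V, W, [0.0, 0.0], G, h=0.1, omega_size=1.0)
```

At the limit, each potential satisfies the fixed point relation *V*_{i} = β Σ_{j} *W*_{ij}*G*(*V*_{j}), anticipating the discussion of nontrivial fixed solutions in section 3.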

Further, Equation (5) has a matrix notation which helps with the study of stability analysis. Indeed, let $M=\underset{1\le k\le L}{\text{max}}{M}_{k}$ and *d* = *L* × *M*, and let **X** = (**X**_{1}, **X**_{2}, ⋯, **X**_{L}) ∈ Ω, where for 1 ≤ *k* ≤ *L*, **X**_{k} = (*x*_{k1}, *x*_{k2}, ⋯, *x*_{kM}) represents the positions of *M*_{k} neurons on the *k*th layer. We assume without loss of generality that *x*_{ki} = 0 if *M*_{k} < *i* ≤ *M*, so as to have a full *L* × *M* matrix. Now consider the matrix

formed by the column vectors ${V}_{n}^{(k)}(X)={\left({V}_{1,n}^{(k)},{V}_{2,n}^{(k)},\cdots \phantom{\rule{0.3em}{0ex}},{V}_{M,n}^{(k)}\right)}^{T}$. In this case, Equation (5) can be written in a matrix form as

with an element-wise notation

for 1 ≤ *i* ≤ *M* and for 1 ≤ *k* ≤ *L*. For more complex layers' architectures, the interested reader can refer to De Domenico et al. (2013) and the references therein, where tensor products are used instead of matrices.

### 2.2. Pulse Emission Rate Functions

Amari (1977) proved the existence of nontrivial solutions for DNFs when *G* is the Heaviside function. Further studies have shown that this is in fact true for a large class of functions *G*. As we stated earlier, functions *G* satisfying the Hammerstein condition *G*(*v*) ≤ μ_{1}*v* + μ_{2} are good candidates. However, most practitioners use either the Heaviside function or the sigmoid function. For some θ > 0 and *v*_{0} ≥ 0, the sigmoid function is defined as ${G}_{1}(v,{v}_{0})={(1+{e}^{-\theta (v-{v}_{0})})}^{-1}$, and the Heaviside function is defined as *G*_{2}(*v, v*_{0}) = *I*_{(v0, ∞)}(*v*), see Figure 2 below. Here *I*_{A} is the indicator function, given as *I*_{A}(*x*) = 1 if *x* ∈ *A* and *I*_{A}(*x*) = 0 otherwise. This shows that the assumption that *G* be nonnegative and increasing is not unrealistic. In fact, we can transform many functions to bear properties similar to those of the Heaviside and sigmoid functions, see for instance Kwessi and Edwards (2020).
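For reference, the two emission rate functions can be written directly; the θ and *v*_{0} values below are illustrative:

```python
import math

def G1(v, v0=0.0, theta=4.0):
    """Sigmoid pulse emission rate: smooth, increasing, bounded in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-theta * (v - v0)))

def G2(v, v0=0.0):
    """Heaviside pulse emission rate: the indicator of (v0, infinity)."""
    return 1.0 if v > v0 else 0.0

# As theta grows, the sigmoid approaches the Heaviside step:
print(round(G1(0.2, theta=100.0), 3), G2(0.2))   # prints: 1.0 1.0
```

The steepness parameter θ thus interpolates between a soft, graded response and the all-or-nothing firing of the Heaviside model.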

**REMARK 1**. *We observe that the choice of the sigmoid activation function* ${G}_{1}(v)={(1+{e}^{-\theta (v-{v}_{0})})}^{-1}$ *is widely preferred in the literature for its bounded nature. Another reason is that it is also suitable when the* ${X}_{n}^{(i)}$ *are binary, that is, they may take the values 0 or 1, where 1 and 0 represent, respectively, active and inactive neurons at time n. In this case*, ${G}_{1}\left({W}_{i0}+{\displaystyle \sum _{j=1}^{M}}{W}_{ij}{X}_{n}^{(j)}+{\eta}_{n}^{(i)}\right)=Pr\left({X}_{n+1}^{(i)}=1\right)$ *represents the probability that there is activity on the neuron at position x*_{i} *at time n* + 1.

### 2.3. Connection Intensity Functions

There are a variety of connection intensity functions (or kernels) that one can choose from. These include the Gaussian kernel $Gau(x,y,\sigma )=\frac{1}{\sigma \sqrt{2\pi}}{e}^{-\frac{1}{2{\sigma}^{2}}{\parallel x-y\parallel}^{2}}$, the Laplacian kernel defined as $Lap(x,y,\sigma )=\frac{1}{2\sigma}{e}^{-\frac{1}{\sigma}\parallel x-y\parallel}$, and the hyperbolic tangent kernel *Thyp*(*x, y*, β, *r*) = 1 + tanh(β*x*^{T} · *y* + *r*), where ∥·∥ is a norm in ℝ^{d}. Note that as defined, the hyperbolic tangent kernel is not symmetric; however, for a suitable choice of β and *r*, one may obtain a nearly symmetric kernel. Indeed, if *r* < 0 and β is close to zero, we have $Thyp(x,y,\beta ,r)\approx (1+\text{tanh}(r)){e}^{-\beta {\parallel x-y\parallel}^{2}}$, see for instance Lin and Lin (2003). This kernel is very useful in non-convex problems and support vector machines.

In practice however, it is very common to select the function *W*(*x, y*) as the difference between either of the functions above, which results in a “Mexican hat” shaped function, that is, either $W(x,y)={\sigma}^{+}Gau(x,y,{\sigma}_{1})-{\sigma}^{-}Gau(x,y,{\sigma}_{2})$, or $W(x,y)={\sigma}^{+}Lap(x,y,{\sigma}_{1})-{\sigma}^{-}Lap(x,y,{\sigma}_{2})$, or $W(x,y)={\sigma}^{+}Thyp(x,y,{\beta}_{1},r)-{\sigma}^{-}Thyp(x,y,{\beta}_{2},r)$. Figure 3 below is an illustration of these kernels for selected parameters. This choice will ensure that neurons within the same cell assembly are connected reciprocally by high synaptic weights σ^{+}, whereas neurons not belonging to the same assembly are connected by low synaptic weights σ^{−} (see for instance Durstewitz et al., 2000; Quinton and Goffart, 2017; Wijeakumar et al., 2017; Jin et al., 2021).

**Figure 3**. An illustration of the connection intensity functions defined above for ${\sigma}^{+}=4.5,{\sigma}^{-}=1.5,{\sigma}_{1}=1,{\sigma}_{2}=4.5,{\beta}_{1}=0.1,{\beta}_{2}=0.5$, and *r* = − 0.1.
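Using the parameter values of Figure 3, the Gaussian difference-of-kernels ("Mexican hat") can be sketched in one dimension; the function names are ours:

```python
import math

def gau(x, y, sigma):
    """Gaussian kernel on the real line."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mexican_hat(x, y, sp=4.5, sm=1.5, s1=1.0, s2=4.5):
    """Difference of Gaussians: narrow excitation (sigma_+ with sigma_1 small)
    minus broad inhibition (sigma_- with sigma_2 large)."""
    return sp * gau(x, y, s1) - sm * gau(x, y, s2)

# Nearby neurons excite each other, distant ones inhibit:
print(mexican_hat(0.0, 0.5) > 0, mexican_hat(0.0, 6.0) < 0)   # prints: True True
```

The sign change with distance is exactly what enforces high within-assembly weights σ^{+} and low between-assembly weights σ^{−} described above.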

## 3. Existence and Stability of Nontrivial Solutions of the DDNFs

### 3.1. Stability Analysis

We recall the essential notions for the stability analysis of a discrete dynamical system *V*_{n+1} = *F*(*V*_{n}), where *F* : ℝ^{d} → ℝ^{d} is a continuously differentiable map.

**DEFINITION 2**.

*(i) A point V*_{∗} *is said to be a fixed point for F if F*(*V*_{∗}) = *V*_{∗}.

*(ii) The orbit* ${O}(V)$ *of a point V is defined as* ${O}(V)=\left\{{F}^{n}(V),\mathrm{\text{where}}\text{}n\in {\mathbb{Z}}_{+}\right\}$ *with* ${F}^{n}=\underset{n\text{}times}{\underbrace{F\circ F\circ \cdots \circ F}}$.

*(iii) The spectral radius* ρ(*A*) *of a matrix A is the maximum of the absolute values of its eigenvalues*.

*(iv) A fixed point V*_{∗} *for F is said to be stable if there exists a neighborhood of V*_{∗} *whose trajectories are arbitrarily close to V*_{∗}, *that is, for all* ϵ > 0, $\exists \delta >0:|V-{V}_{\ast}|<\delta \Rightarrow |{F}^{n}(V)-{V}_{\ast}|<\u03f5$.

*(v) A fixed point V*_{∗} *for F is said to be attracting if there exists a neighborhood of V*_{∗} *whose trajectories converge to V*_{∗}, *that is, for all* ϵ > 0, $\exists \delta >0:|V-{V}_{\ast}|<\delta \Rightarrow \underset{n\to \infty}{\text{lim}}{F}^{n}(V)={V}_{\ast}$.

*(vi) A fixed point V*_{∗} *for F is said to be asymptotically stable if it is both stable and attracting*.

*(v) A fixed point V*_{∗} *for F is said to be unstable if it is not stable*.

**THEOREM 3**. *Let V*_{n+1} = *F*(*V*_{n}) *be a discrete dynamical system with a fixed point V*_{∗} *and let A* = *JF*(*V*_{∗}) *be its Jacobian matrix*.

*1. If* ρ(*A*) < 1, *then V*_{∗} *is asymptotically stable*.

*2. If* ρ(*A*) > 1, *then V*_{∗} *is unstable*.

*3. If* ρ(*A*) = 1, *then V*_{∗} *may be stable or unstable*.
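Theorem 3 reduces stability checking to a spectral radius computation; here is a minimal two-dimensional sketch, where the Jacobian values are made up for illustration:

```python
import math

def spectral_radius_2x2(J):
    """Spectral radius of a 2x2 matrix via the quadratic eigenvalue formula."""
    (a, b), (c, d) = J
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:                      # real eigenvalues
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)              # complex pair: |lambda| = sqrt(det)

def classify(J):
    """Apply Theorem 3: rho < 1 stable, rho > 1 unstable, rho = 1 inconclusive."""
    rho = spectral_radius_2x2(J)
    if rho < 1:
        return "asymptotically stable"
    if rho > 1:
        return "unstable"
    return "inconclusive"

# Jacobian of a toy 2-neuron map F evaluated at its fixed point V* = (0, 0):
J = [[0.5, 0.1], [0.2, 0.4]]
print(classify(J))   # eigenvalues are 0.6 and 0.3, so rho(A) = 0.6 < 1
```

For larger systems the same test applies with a numerical eigenvalue routine in place of the closed 2 × 2 formula.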

### 3.2. Existence of Nontrivial Fixed Solutions for the DDNFs Model

A nontrivial solution for the DNFs is a non-constant solution for which the left-hand side of Equation (1) is equal to zero. For the DDNFs, it can be formulated in the following way: for a given *x*_{i} on the *k*th layer, the DDNFs in (5) has a nontrivial fixed point ${V}_{i,\ast}^{(k)}$ if for all *n* ≥ 1, we have ${V}_{i,n+1}^{(k)}=F({V}_{i,n}^{(k)})={V}_{i,n}^{(k)}:={V}_{i,\ast}^{(k)}$. This is satisfied when

The right-hand side of Equation (7) is a Riemann sum for

So the existence of a nontrivial fixed point for the continuous DNFs Equation (1) implies the existence of a nontrivial fixed solution for the DDNFs. According to Hammerstein (1930), such a nontrivial solution exists if *W*(*x, y*) is symmetric positive definite and *G* satisfies *G*(*v*) ≤ μ_{1}*v* + μ_{2}, where 0 < μ_{1} < 1, μ_{2} > 0 are constants.

We will use a different approach to prove the existence of nontrivial solutions of DNFs. Let $(S,{A},\mu )$ be a measure space. We will consider the space *L*^{1}(*S*) of equivalence classes of integrable functions on *S*, endowed with the norm $\parallel f\parallel ={\int}_{S}|f(x)|dx$, where *dx* = μ(*dx*).

**DEFINITION 4**. *A stochastic kernel W is a measurable function W* : *S* × *S* → ℝ *such that*

*1. W*(*x, y*) ≥ 0, ∀*x, y* ∈ *S*,

*2*. ${{\displaystyle \int}}_{S}W(x,y)dy=1,\forall x\in S$.

**DEFINITION 5**. *A density function f is an element of L*^{1}(*S*) *such that f* ≥ 0 *and* ${\int}_{S}fd\mu =1$.

**DEFINITION 6**. *A Markov operator P on L*^{1}(*S*) *is a positive linear operator that preserves the norm, that is, P* : *L*^{1}(*S*) → *L*^{1}(*S*) *is linear such that*

*1. Pf* ≥ 0 *if f* ≥ 0 and *f* ∈ *L*^{1}(*S*),

*2*. ∥*Pf*∥ = ∥*f*∥.

**REMARK 7**.

*We note that f* ≥ 0 *(and Pf* ≥ 0*) represents equivalent classes of integrable functions on S that are greater or equal to 0 except on a set of measure zero and thus the inequality should be understood almost everywhere*.

*We also note that when an operator P satisfies the second condition above, it is sometimes referred to as a non-expansive operator*.

**THEOREM 8**. *(Corollary 5.2.1–Lasota and Mackey, 2004). Let* $S\equiv (S,{A},\mu )$ *be a measure space and P* : *L*^{1}(*S*) → *L*^{1}(*S*) *a Markov operator. If for some density function f, there is g* ∈ *L*^{1}(*S*) *such that P*^{n}*f* ≤ *g for all n, then*

Now we use Definitions 4, 5, and 6 and Theorem 8 above to state our result on the existence of nontrivial fixed solution of DNFs.

**THEOREM 9**. *Let* $S:=\Omega ={\prod}_{i=1}^{d}\left[{a}_{i},{b}_{i}\right]\subset {\mathbb{R}}^{d}$. *Consider a DNFs with no external stimulus [S*(*x*) = 0*]. If W*(*x, y*) *is positive and bounded, then the DNFs (and consequently the DDNFs) has at least one nontrivial fixed solution V*_{∗}.

PROOF. Let ${A}={B}(S)$ be the Borel σ-algebra on *S* generated by the family $\left\{{\prod}_{i=1}^{d}({l}_{i},{u}_{i}):{a}_{i}<{l}_{i}<{u}_{i}<{b}_{i}\right\}$ and μ be the associated Borel measure. A nontrivial solution *V*(*x*) for the DNFs satisfies the equation

Consider the linear operator *P* defined on *S* as

Clearly, *P* is a Markov operator and we have

with *W*_{1}(*x, y*) = *W*(*x, y*) and ${W}_{n+1}(x,y)={\displaystyle {\int}_{S}}W(x,z){W}_{n}(z,y)dz$ for *n* ≥ 1. Moreover, by the positivity and boundedness of *W*, there exists *M* > 0 such that for a density function *f*, we have *P*^{n}*f*(*x*) ≤ *M*, ∀*n* ≥ 1. By the above Theorem, there exists a density *f*_{∗} such that *Pf*_{∗} = *f*_{∗}. We conclude by observing that *f*_{∗}(*x*) = *G*(*V*_{∗}(*x*)) = *V*_{∗}(*x*) for some function *V*_{∗}(*x*) defined on Ω and thus a nontrivial solution for the DNFs and DDNFs exists. That is, *V*_{∗} is a fixed point of *G*.
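Theorem 9 can be illustrated numerically by discretizing *S* = [0, 1] and iterating the operator *P*. The kernel below is an assumed positive, bounded example, and its columns are normalized so that the discrete operator preserves total mass (the discrete analogue of the Markov property):

```python
import math

# Discretize S = [0, 1] into M cells and build a positive, bounded kernel:
M, dx = 50, 1.0 / 50
xs = [(i + 0.5) * dx for i in range(M)]
K = [[math.exp(-10 * (x - y) ** 2) + 0.1 for y in xs] for x in xs]

# Normalize columns so the discrete operator preserves total mass:
col = [sum(K[i][j] for i in range(M)) * dx for j in range(M)]
P = [[K[i][j] / col[j] for j in range(M)] for i in range(M)]

f = [1.0] * M                         # start from the uniform density
for _ in range(500):                  # iterate Pf until it stabilizes
    f = [sum(P[i][j] * f[j] * dx for j in range(M)) for i in range(M)]

mass = sum(f) * dx                    # mass is conserved: still a density
residual = max(abs(sum(P[i][j] * f[j] * dx for j in range(M)) - f[i])
               for i in range(M))     # f is (numerically) a fixed density
```

The iterates converge to a positive fixed density *f*_{∗} with *Pf*_{∗} = *f*_{∗}, mirroring the conclusion of the theorem.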

One major limitation of the above theorem is that it only applies to positive connectivity functions *W*(*x, y*), which would exclude a wide range of DNFs; for example, competing DNFs use almost exclusively negative weight functions, as do DNFs for working memories, see for instance Zibner et al. (2011). One remedy could be to prove the existence of weak fixed solutions of (5) using iterative processes without a positivity restriction. Indeed, let Ω be a nonempty, closed and convex subset of *l*^{2}, the space of infinite sequences **v** = (*v*_{1}, *v*_{2}, ⋯) such that $\parallel v{\parallel}_{2}^{2}=\sum _{n=1}^{\infty}|{v}_{n}{|}^{2}<\infty $. We recall that *l*^{2} is a Hilbert space with the inner product $\langle v,w\rangle =\sum _{n=1}^{\infty}{v}_{n}\overline{{w}_{n}}$.

**DEFINITION 10**. *Consider a map T* : Ω → *l*^{2}. *Suppose there is a constant C such that* ${\parallel T({\mathbf{v}}_{1})-T({\mathbf{v}}_{2})\parallel}_{2}\le C{\parallel {\mathbf{v}}_{1}-{\mathbf{v}}_{2}\parallel}_{2}$ *for all* **v**_{1}, **v**_{2} ∈ Ω.

*Then*

*1. T is called a Lipschitz map if C* > 0.

*2. T is called a contraction map if 0 < C* < 1.

*3. T is called a non-expansive map if C* = 1.

**DEFINITION 11**. *Let T* : Ω → *l*^{2} *be either a contraction or a non-expansive map. A Mann-type iterative process is a difference equation on* Ω *defined as*

**THEOREM 12**. *Suppose* Ω *is a subset of l*^{2} *and W*(*x, y*) *is bounded by W*_{0}. *If G*(*v*) *is Lipschitz with Lipschitz constant C such that W*_{0}*C* ≤ 1, *then the DDNFs (5) is a Mann-type iterative process with at least one weak solution*.

PROOF. Here, we denote *V*_{n} to mean *V*_{n}(**X**) introduced earlier. We start by rewriting certain quantities in vector and matrix form. Let *d* = *L* × *M* where as above, $M=\underset{1\le k\le L}{\text{max}}{M}_{k}$, and let ${S}_{n}=\left\{{S}_{i,n}^{k},1\le k\le L,1\le i\le M\right\}$. Let $G\left({V}_{n}^{(\ell )}\right)={\left(G({V}_{j,n}^{(\ell )})\right)}_{1\le j\le M}$ and ${\Gamma}_{i}^{(k,\ell )}={\left({\beta}_{\ell}{W}_{i,j}^{(k,\ell )}\right)}_{1\le j\le M}$, where as above, we complete the vectors with zeros for *M*_{ℓ} < *j* ≤ *M*. Then we have the dot product

Similarly, let ${\Gamma}_{i}^{(k)}={\left({\Gamma}_{i}^{(k,\ell )}\right)}_{\ell \in {L}_{k}}$ and $G({V}_{n})={\left(G({V}_{n}^{(\ell )})\right)}_{\ell \in {L}_{k}}$. We also have the dot product

Now we consider the matrix $\Gamma ={\left[{\Gamma}^{(1)},{\Gamma}^{(2)},\cdots ,{\Gamma}^{(L)}\right]}^{T}$ formed by block matrices ${\Gamma}^{(k)}=\left({\Gamma}_{1}^{(k)},{\Gamma}_{2}^{(k)},\cdots \phantom{\rule{0.3em}{0ex}},{\Gamma}_{M}^{(k)}\right)$ for 1 ≤ *k* ≤ *L*. We observe that (5) is a Mann-type process written as

where *T* is the linear map on Ω defined as *T*(*V*_{n}) = Γ · *G*(*V*_{n})+*S*_{n}. It follows that for *U*_{n}, *V*_{n} ∈ Ω

Since by hypothesis *CW*_{0} ≤ 1, *T* is either a contraction or a non-expansive map. It is known (see Mann, 1953) that in this case, the Mann-type iterative process (9) has at least one weak solution *V*_{∗}, that is, there exists *V*_{∗} ∈ Ω such that ${lim}_{n\to \infty}\langle {V}_{n},V\rangle =\langle {V}_{\ast},V\rangle $ for all *V* ∈ *l*^{2}.
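The Mann-type process (9) is easy to realize numerically. In the sketch below, the map *T* is a made-up contraction standing in for Γ · *G*(*V*_{n}) + *S*_{n}:

```python
import math

def mann_iterate(T, V0, alpha, n_steps=300):
    """Mann-type process V_{n+1} = alpha*V_n + (1 - alpha)*T(V_n)."""
    V = V0
    for _ in range(n_steps):
        V = [alpha * v + (1 - alpha) * t for v, t in zip(V, T(V))]
    return V

# A contraction T (Lipschitz constant 0.3 < 1), standing in for Gamma*G(V) + S:
T = lambda V: [0.3 * math.tanh(V[1]) + 0.5, 0.3 * math.tanh(V[0]) - 0.2]
V = mann_iterate(T, [0.0, 0.0], alpha=math.exp(-0.1))
# At the limit, V solves T(V) = V, i.e., it is a fixed solution of the DDNF.
```

Since the fixed points of the Mann recursion are exactly the fixed points of *T*, the iterates converge to a solution of *T*(*V*) = *V*, as asserted in the proof.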

The second equation in (9) is a generalization of Equation (7) in Quinton and Goffart (2017) and according to their description, it can be considered a predictive neural field (PNF) where *T*(*V*_{n}) is an internal projection corresponding to the activity required to cancel the lag of the neural field *V*_{n} for a specific hypothetically observed trajectory, by exciting the field in the direction of the motion, ahead of the current peak location, and inhibiting the field behind the peak.

### 3.3. Stability Analysis of Hyperbolic Fixed Solutions

To discuss stability analysis for the DDNFs in (5), we separate the analysis into a single-layer system and a multiple-layer system. For simplification purposes, we consider the following notations when necessary:

If *L* = 1 and for any *M*_{1} > 0, we will write ${K}_{ij}:={K}_{ij}^{(1,1)}$ and ${\overline{K}}_{i,\cdot}:={\overline{K}}_{i,\cdot}^{(1,1)}$.

If *L* > 1 and *M*_{k} = *M*_{ℓ} = 1 for all 1 ≤ *k*, ℓ ≤ *L*, we will write ${K}^{(k,\ell )}:={K}_{11}^{(k,\ell )}$.

#### 3.3.1. Single-Layer DDNFs Model

Let *x*_{i} ∈ Ω. Suppose that there are *M* neurons interconnected and connected to the neuron located at *x*_{i}. Let *W*_{ij} be the intensity of the connection between the neuron at *x*_{i} and the neuron at *x*_{j}. Let

be the weighted average rate of change of pulse emissions received or emitted by the neuron at *x*_{i}, where ${V}_{i,\ast}:={V}_{i,\ast}^{(1)}$ is a nontrivial solution of Equation (5) with *L* = 1. Some authors like Lin et al. (2009) specify the type of connection, by dividing the connectivities or synaptic strengths *W*_{ij} into a group of receptive signals called “feed backward” and a group of emission signals called “feed forward.” Here we make no such specification. Recall that ${\alpha}_{n}=1-\varphi ({h}_{n})={e}^{-{h}_{n}}$. Henceforth, we fix *h*_{n} = *h* > 0 for all *n* so that $0<\alpha :={\alpha}_{n}={e}^{-h}<1$.

**THEOREM 13**. *Let* Ω ⊆ ℝ *be a bounded region and let M be a positive integer. Let* **x** = (*x*_{i})_{1≤i≤M} *be a vector of M interconnected points on* Ω *via a connectivity function W*(·, ·), *and let* **V**_{∗}**(x)** = (*V*_{i,∗})_{1≤i≤M} *be a nontrivial fixed point of the DDNFs on* Ω. *If* $\underset{1\le i\le M}{\text{max}}|{\mu}_{i}|<1$, *where* ${\mu}_{i}=|\Omega |{\overline{K}}_{i,\cdot}$, *then* **V**_{∗}**(x)** *is asymptotically stable*.

PROOF. We have ${\beta}_{1}=\beta =\frac{|\Omega |}{M}$. Let *x*_{i} ∈ Ω and let *V*_{i,∗} be a nontrivial fixed point of the DDNFs with one layer. Let 1 ≤ *j* ≤ *M* and *x*_{j} ∈ Ω. The DDNFs with one layer in Equation (5) can be written as

and element-wise as

The function *f* : ℝ^{M} → ℝ^{M} is continuously differentiable since the function *G* : ℝ → ℝ is, and given 1 ≤ *i, p* ≤ *M*, we have that

where δ_{ij} is the Kronecker symbol with δ_{ij} = 1 if *i* = *j* and δ_{ij} = 0 if *i* ≠ *j*. It follows that

We define the adjacency matrix as the Jacobian matrix *A* = *Jf*[**V**_{∗}**(x)**], where *A* = (*a*_{ij}) with ${a}_{ij}=\frac{\partial f({{V}_{\ast}(x))}_{i}}{\partial {V}_{j,n}}$. *A* is an *M* × *M* matrix; let ρ(*A*) be its spectral radius. Therefore, we have

Since 0 < α < 1, the last equality is equivalent to

It follows that if $\underset{1\le i\le M}{\text{max}}|{\mu}_{i}|<1$, then we have that $\rho (A)\le \parallel A{\parallel}_{\infty}<max\left\{1,\underset{1\le i\le M}{\text{max}}|{\mu}_{i}|\right\}=1$ and consequently, **V_{∗}(x)** is asymptotically stable.
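This criterion is straightforward to check numerically. In the sketch below, the values of *K*_{ij} = *W*_{ij}*G*′(*V*_{∗}(*x*_{j})) are assumed for illustration, ${\overline{K}}_{i,\cdot}$ is taken as the mean of the |*K*_{ij}| over *j*, and the adjacency matrix is built as *A* = α*I* + (1 − α)β*K*, our reading of the Jacobian computed above:

```python
import math

# Illustrative weighted rates K_ij = W_ij * G'(V_*(x_j)) (assumed values):
alpha, omega = math.exp(-0.1), 1.0
K = [[0.5, -0.3, 0.2, 0.1],
     [0.1,  0.4, -0.2, 0.3],
     [-0.2, 0.1, 0.5, 0.2],
     [0.3,  0.2, -0.1, 0.4]]
M = len(K)
beta = omega / M

# Adjacency (Jacobian) matrix at the fixed point, A = alpha*I + (1-alpha)*beta*K:
A = [[alpha * (i == j) + (1 - alpha) * beta * K[i][j] for j in range(M)]
     for i in range(M)]

mu = [omega * sum(abs(k) for k in row) / M for row in K]  # mu_i = |Omega|*Kbar_i
row_norm = max(sum(abs(a) for a in row) for row in A)     # ||A||_inf >= rho(A)
print(max(mu) < 1, row_norm < 1)   # both True: V* is asymptotically stable
```

Note that the row norm satisfies ∥*A*∥_{∞} = α + (1 − α) max_{i} μ_{i}, so μ_{i} < 1 for every *i* indeed forces ∥*A*∥_{∞} < 1.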

**REMARK 14**.

*The condition* $\underset{1\le i\le M}{\text{max}}{\mu}_{i}<1$ *links the size of the domain* Ω *to other parameters such as the rate of change of the pulse emission function G, the connectivity function W*(*x, y*), *and the number of connected neurons M. Indeed*, ${\mu}_{i}=|\Omega |{\overline{K}}_{i,\cdot}$. *Therefore, achieving* $\underset{1\le i\le M}{\text{max}}{\mu}_{i}<1$ *requires* $\underset{1\le i\le M}{\text{max}}{\overline{K}}_{i\cdot}<\frac{1}{|\Omega |}$. *This means that to ensure stability of the fixed solution* **V**_{∗}**(x)**, *it is enough to have the weighted rates of change of pulse emission* ${K}_{ij}={W}_{ij}{G}^{\prime}({V}_{\ast}({x}_{j}))$ *be bounded by 1 for each neuron. This condition is obviously true if the neuron at x*_{i} *is not connected with the neuron at x*_{j}, *since W*_{ij} *would then be zero. In fact, the latter corresponds to the trivial fixed solution* **V**_{∗}**(x)** = 0.

#### 3.3.2. Special Case of Single Layer Model With a Heaviside Pulse Emission Function

Suppose the pulse emission function *G*(*x*) is the Heaviside function *G*_{2}(*v, v*_{0}). There are two possible approaches to discuss the stability of the nontrivial solution. First, we observe that the derivative ${G}_{2}^{\prime}(v,{v}_{0})$ of *G*_{2}(*v, v*_{0}) exists in the sense of distribution and is given as ${G}_{2}^{\prime}(v,{v}_{0})=\delta ({v}_{0})$ where δ(*v*_{0}) is the Dirac measure at *v*_{0}. It can be written as ${G}_{2}^{\prime}(v,{v}_{0})=0$ if *v* ≠ *v*_{0} and ${G}_{2}^{\prime}(v,{v}_{0})=\infty $ if *v* = *v*_{0}. This means that for a nontrivial solution, the adjacency matrix *A* defined above is reduced to the diagonal matrix *A* = diag(α). It follows that ρ(*A*) = α < 1 and the nontrivial solution *V*_{∗}(*x*) is asymptotically stable. Another approach is to note that Equation (5) with one layer (*L* = 1) can be written as

where

with

Moreover, we have

Suppose that for all *n* ∈ ℕ and for all *x* ∈ Ω, *S*_{n}(*x*) and *W*(*x, y*) are bounded by say *S* and *W*, respectively. Let *Y*_{m} = (1 − α)[*S* + |Ω|*W*]. For all 1 ≤ *i* ≤ *M*, for *x*_{i} ∈ Ω, and *n* ∈ ℕ we have |*Y*_{i,n+1}| ≤ *Y*_{m}, and

This shows that $\underset{n\to \infty}{\text{lim}}{V}_{i,n}$ exists and $V(x)=\underset{n\to \infty}{\text{lim}}{V}_{n}(x)$ defines the state of the field over time. If $\underset{n\to \infty}{\text{lim}}{V}_{n}(x)<0$, then the field settles into an inhibitory phase over time, and if $\underset{n\to \infty}{\text{lim}}{V}_{n}(x)>0$, the field settles into an excitatory phase over time. In general, *S*_{i,n} = ν+*U*_{i,n}, where ν is a negative real number referred to as the resting level, which ensures that the DNFs produce no output in the absence of external input *U*_{i,n}. If *U*_{i,n} ≡ 0 and *V*_{0}(*x*) < 0 for all *x* ∈ Ω, then ${V}_{i,n+1}={\alpha}^{n+1}{V}_{i,0}+(1-{\alpha}^{n+1})\nu $, since λ_{i,n−k} = 0 for all *k* = 0, ⋯, *n* and for all 1 ≤ *i* ≤ *M*. Hence we will have $\underset{n\to \infty}{\text{lim}}{V}_{i,n}=\nu $, see section 3.5.1 below for an illustration.
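The no-input case can be checked directly. The sketch below (parameter values assumed) shows every sub-threshold neuron of a Heaviside field relaxing to the resting level ν:

```python
import math

def heaviside_field_step(V, nu, W, h, omega_size, v0=0.0):
    """One DDNF step with Heaviside emission G2 and resting level nu < 0
    (no external input U, so S = nu for every neuron)."""
    M, alpha, beta = len(V), math.exp(-h), omega_size / len(V)
    G2 = lambda v: 1.0 if v > v0 else 0.0
    return [alpha * V[i]
            + (1 - alpha) * (beta * sum(W[i][j] * G2(V[j]) for j in range(M))
                             + nu)
            for i in range(M)]

nu, W = -0.5, [[1.0] * 3 for _ in range(3)]
V = [-0.1, -0.2, -0.3]          # start below threshold: no neuron ever fires
for _ in range(400):
    V = heaviside_field_step(V, nu, W, h=0.1, omega_size=1.0)
# Every neuron settles at the resting level nu = -0.5.
```

Because the initial potentials are negative, *G*_{2} never fires, the recursion degenerates to *V*_{i,n+1} = α*V*_{i,n} + (1 − α)ν, and the whole field converges geometrically to ν.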

### 3.4. Multiple-Layers DDNFs Model

Let 1 ≤ *k, m* ≤ *L*. Consider the DDNFs with *L* layers given above and let *x*_{i} and *x*_{p} be points on the *k*th layer.

Consider a fixed point solution ${V}_{i,\ast}^{(k)}$. Given a layer 1 ≤ *k* ≤ *L* and a neuron at position *x*_{i} on it, there are two sources of variability for the potential ${V}_{i,n+1}^{(k)}={\left[{f}_{k}({V}_{n}(X))\right]}_{i}$: firstly, the variability due to the *M*_{k} neurons on the *k*th layer connected to the neuron at position *x*_{i}, and secondly, the variability due to the neurons located on possibly two layers (*k* − 1) and (*k* + 1) connected to the *k*th layer. Therefore, let 1 ≤ *m* ≤ *L* and let *x*_{q} be a point on the *m*th layer. By the chain rule, we have

where the variability on the *k*th layer is

From Equation (7), we obtain the variability due to layers *k* − 1 and *k* + 1:

Let $M={\displaystyle \sum _{\ell =1}^{L}}{M}_{\ell}$. We recall that ${K}_{ij}^{(k,\ell )}:={W}_{ij}^{(k,\ell )}{G}^{\prime}({V}_{j,\ast}^{(\ell )})$. Now we consider an *M* × *M* super-adjacency matrix $A=JF({V}_{\ast}(X))={\left({A}^{(k,m)}\right)}_{1\le k,m\le L}$, where **A**^{(k,m)} is an *M*_{k} × *M*_{m} block matrix defined as ${A}^{(k,m)}=\left({a}_{ij}^{(k,m)}\right)$, where ${a}_{ij}^{(k,m)}={\Phi}_{i,j}^{(k,k)}\cdot {\Delta}_{i,j}^{(k,m)}$ with

and

Consequently, **A**^{(k,m)} **= 0** if *m* ∉ *L*_{k}, where **0** represents the zero matrix. Therefore, the super-adjacency matrix can be written as

Clearly, *A* is a block tridiagonal matrix that reduces to a block diagonal matrix if the pulse emission rate function *G* is the Heaviside function, and to a single *M* × *M* diagonal matrix if *L* = 1. There is a rich literature on the subject of tridiagonal matrices and the research is still on-going. There is no closed-form formula for the eigenvalues of *A* since they are not obvious to calculate, except for some special cases.
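The zero pattern of the super-adjacency matrix can be illustrated with a short sketch. This is schematic only: the block entries below are placeholders, since only the structure **A**^{(k,m)} **= 0** for *m* ∉ {*k* − 1, *k*, *k* + 1} described above matters here.

```python
# Assemble a super-adjacency matrix for L layers with M_k neurons each,
# keeping only the tridiagonal blocks (k, k-1), (k, k), and (k, k+1).
def super_adjacency(sizes, block):
    L = len(sizes)
    offsets = [sum(sizes[:k]) for k in range(L + 1)]
    M = offsets[-1]                       # total number of neurons
    A = [[0.0] * M for _ in range(M)]
    for k in range(L):
        for m in (k - 1, k, k + 1):       # only layers adjacent to layer k couple to it
            if 0 <= m < L:
                for i in range(sizes[k]):
                    for j in range(sizes[m]):
                        A[offsets[k] + i][offsets[m] + j] = block(k, m, i, j)
    return A

# Placeholder entries: a_{ij}^{(k,m)} is just tagged by its block indices here.
A = super_adjacency([2, 3, 2], lambda k, m, i, j: 1.0 + 0.1 * (m - k))

# The (1,3)- and (3,1)-blocks are zero: layers 1 and 3 are not adjacent.
print(A[0][5], A[6][0])  # -> 0.0 0.0
```

For *L* = 1 the assembled matrix is the single block **A**^{(1,1)}, consistent with the one-layer case above.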

#### 3.4.1. Special Case: Single Neuron Multiple-Layers DDNFs

This is a special case worth mentioning, since all *L* layers have exactly one neuron and they are interconnected. In fact, in this case, *M*_{k} = 1, β_{k} = |Ω| for 1 ≤ *k* ≤ *L*, and the super-adjacency matrix reduces to an *L* × *L* tridiagonal matrix

where from Equations (11) and (12), one has

We observe that *K*^{(k,k)} is the weighted rate of change of the pulse emission function on the *k*th layer, and *K*^{(k,k+1)} can be regarded as the transitional weighted rate of change of the pulse emission function from layer *k* to layer *k* + 1. Likewise, *K*^{(k−1,k)} is the transitional weighted rate of change of the pulse emission function from layer *k* − 1 to layer *k*. From Molinari (2008), the eigenvalues of A are given by the equation

Suppose further that *a*_{1} = *a*_{2} = ⋯ = *a*_{L} = *a*, *b*_{1} = *b*_{2} = ⋯ = *b*_{L−1} = *b*, and *c*_{1} = *c*_{2} = ⋯ = *c*_{L−1} = *c*. This means that the weighted rates of change of the pulse emission function satisfy ${K}^{(k,k)}={K}_{0}$ for 1 ≤ *k* ≤ *L*, ${K}^{(k-1,k)}={K}_{1}$ for 2 ≤ *k* ≤ *L*, and ${K}^{(k,k+1)}={K}_{2}$ for 1 ≤ *k* ≤ *L* − 1. From Kulkarni et al. (1999), the eigenvalues of *A* will be given by

Now for 1 ≤ *k* ≤ *L*, put

We therefore have the following result.

**THEOREM 15**. *Suppose we are in the presence of a DDNFs with L layers with M* = 1 *point each, interconnected via a positive connectivity function W and a pulse emission rate function G whose rate of change is identical on all layers. Then*

*1. The fixed point V*_{∗}(*X*) *is asymptotically stable if and only if* $\underset{1\le k\le L}{\text{max}}|{\mu}_{k}|<1$.

*2. The fixed point V*_{∗}(*X*) *is unstable if and only if* $\underset{1\le k\le L}{\text{max}}|{\mu}_{k}|>1$.

*3. The fixed point V*_{∗}(*X*) *may be stable or unstable if* $\underset{1\le k\le L}{\text{max}}|{\mu}_{k}|=1$.

PROOF. By hypothesis, *a* = α + (1 − α)|Ω|*K*_{0}, *b* = [α + (1 − α)|Ω|*K*_{0}]|Ω|*K*_{1}, and *c* = [α + (1 − α)|Ω|*K*_{0}]|Ω|*K*_{2}, for some *K*_{0}, *K*_{1}, *K*_{2} > 0. For 1 ≤ *k* ≤ *L*, the eigenvalues are given as

From Theorem 3, the proof is complete.
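The closed-form eigenvalues used in the proof can be verified directly. For the *L* × *L* tridiagonal Toeplitz matrix with diagonal *a*, super-diagonal *b*, and sub-diagonal *c* with *bc* > 0, the eigenvalues are μ_{k} = *a* + 2√(*bc*) cos(*k*π/(*L* + 1)), with eigenvectors *v*_{j} = (*c*/*b*)^{j/2} sin(*jk*π/(*L* + 1)) (Kulkarni et al., 1999). A pure-Python check, with illustrative parameter values rather than the paper's:

```python
import math

def check_tridiag_eigs(a, b, c, L, tol=1e-10):
    """Verify mu_k = a + 2*sqrt(b*c)*cos(k*pi/(L+1)) for tridiag(c, a, b)."""
    r = math.sqrt(c / b)
    ok = True
    for k in range(1, L + 1):
        theta = k * math.pi / (L + 1)
        mu = a + 2.0 * math.sqrt(b * c) * math.cos(theta)
        # Candidate eigenvector, padded so that v[0] = v[L+1] = 0.
        v = [r ** j * math.sin(j * theta) for j in range(L + 2)]
        for j in range(1, L + 1):
            resid = c * v[j - 1] + a * v[j] + b * v[j + 1] - mu * v[j]  # (A v - mu v)_j
            ok = ok and abs(resid) < tol
    return ok

def theorem15_stable(alpha, omega, K0, K1, K2, L):
    # From the proof: a = alpha + (1-alpha)*|Omega|*K0 and
    # sqrt(b*c) = a * |Omega| * sqrt(K1*K2), so each eigenvalue is
    # mu_k = a * (1 + 2*|Omega|*sqrt(K1*K2)*cos(k*pi/(L+1))).
    a = alpha + (1 - alpha) * omega * K0
    s = a * omega * math.sqrt(K1 * K2)
    mus = [a + 2 * s * math.cos(k * math.pi / (L + 1)) for k in range(1, L + 1)]
    return max(abs(m) for m in mus) < 1

print(check_tridiag_eigs(0.6, 0.2, 0.3, 8))              # -> True
print(theorem15_stable(0.45, 1.0, 0.5, 0.1, 0.1, 10))    # -> True
```

The `theorem15_stable` helper is hypothetical and simply evaluates the max-modulus criterion of Theorem 15 for given α, |Ω|, *K*_{0}, *K*_{1}, *K*_{2}, and *L*.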

**REMARK 16**.

*1. It is worth observing that in the case L* = 1 *and M* = 1, *then K*_{1} = *K*_{2} = 0 *and thus μ*_{1} = *α* + (1 − *α*)|Ω|*K*_{0}. *We see that* |*μ*_{1}| < 1 *if* |Ω|*K*_{0} < 1, *yielding the condition obtained in the one-layer model*.

*2. We also note that in fact* $\underset{1\le k\le L}{\text{max}}|{\mu}_{k}|=\left[\alpha +(1-\alpha )|\Omega |{K}_{0}\right]\left[1+2|\Omega |\sqrt{{K}_{1}{K}_{2}}\right]$ *and therefore the stability condition* $\left[\alpha +(1-\alpha )|\Omega |{K}_{0}\right]\left[1+2|\Omega |\sqrt{{K}_{1}{K}_{2}}\right]<1$ *links all the parameters of the system*.

*3. This special case is also important in that we can imagine a large number of neurons placed on individual layers and interconnected in a forward or backward manner only, each receiving external input. If the first and last neurons are connected so that the system forms a weighted closed graph, as in Figure 4 (where the thickness represents the weights), then the super-adjacency matrix is given as*

*According to Molinari (2008), the eigenvalues are given by*

#### 3.4.2. Special Case: Single Neuron Two-Layer DDNFs

We now discuss a single neuron two-layer DDNFs. In this case, we still have *M*_{k} = 1 for *k* = 1, 2 and *L* = 2. It follows that the super-adjacency matrix is given as $A=\left(\begin{array}{cc}{a}_{1}& {b}_{1}\\ {c}_{1}& {a}_{2}\end{array}\right)$ where from Equations (11) and (12), we have

Let

We have the following stability result.

**THEOREM 17**. *Consider a two-layer DDNFs with one neuron each. Then the nontrivial solution V*_{∗}(*X*) *is asymptotically stable if and only if the following conditions are satisfied:*

*1*. 0 < θ < 1,

*2*. μ < 1,

*3*. ${(\mu \theta +2-{\mu}_{0})}^{2}<4\theta (\mu \theta +1-{\mu}_{0})$.

PROOF. By the determinant-trace analysis, see for instance Elaydi (2007), it is enough to prove that the above conditions are equivalent to

The determinant of the super-adjacency matrix is

Also

Therefore, since θ < 1 and μ < 1, we have that det(*A*) < 1. Let $P(\alpha ):=\mathrm{\text{det}}(A)-\mathrm{\text{tr}}(A)+1=\theta {\alpha}^{2}+(\mu \theta +2-{\mu}_{0})\alpha +\mu \theta +1-{\mu}_{0}$. The discriminant of *P*(α) is $\Delta ={(\mu \theta +2-{\mu}_{0})}^{2}-4\theta (\mu \theta +1-{\mu}_{0})$. Therefore *P*(α) is positive if Δ < 0 with θ > 0, that is, ${(\mu \theta +2-{\mu}_{0})}^{2}<4\theta (\mu \theta +1-{\mu}_{0})$ with θ > 0.
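The determinant–trace criterion invoked at the start of the proof states that both eigenvalues of a 2 × 2 matrix lie strictly inside the unit circle if and only if |tr *A*| < 1 + det *A* and det *A* < 1 (Elaydi, 2007). A small sketch of this test, on illustrative matrices rather than the paper's super-adjacency matrix:

```python
def det_trace_stable(a11, a12, a21, a22):
    # Determinant-trace (Jury) test for a 2x2 matrix: spectral radius < 1
    # if and only if |tr A| < 1 + det A and det A < 1.
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    return abs(tr) < 1 + det and det < 1

# Triangular matrix with eigenvalues 0.5 and 0.2: asymptotically stable.
print(det_trace_stable(0.5, 0.0, 0.3, 0.2))   # -> True
# Triangular matrix with an eigenvalue 1.2: unstable.
print(det_trace_stable(1.2, 0.0, 0.0, 0.5))   # -> False
```

In the theorem, these two inequalities together with *P*(α) = det(*A*) − tr(*A*) + 1 > 0 translate into the three conditions on θ, μ, and μ_{0}.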

**REMARK 18**.

*1. We note that the inequality θ < 1 must be strict, otherwise the third condition in the theorem would be false. To see why, observe that θ* = 1 ⇒ *L* = 1, *thus K*^{(2,2)} = 0. *Hence* ${\mu}_{0}=|\Omega |{K}^{(1,1)}$ *and* μ = α*K*^{(1,1)}. *Therefore*, ${(\mu \theta +2-{\mu}_{0})}^{2}<4\theta (\mu \theta +1-{\mu}_{0})\iff {(\alpha {K}^{(1,1)}+2-|\Omega |{K}^{(1,1)})}^{2}<4(\alpha {K}^{(1,1)}+1-|\Omega |{K}^{(1,1)})$, *that is*, ${\left((\alpha -|\Omega |){K}^{(1,1)}+2\right)}^{2}<4\left((\alpha -|\Omega |){K}^{(1,1)}+1\right)$, *which is impossible since the latter would be equivalent to saying* ((α − |Ω|)*K*^{(1,1)})^{2} < 0.

*2. We observe that the condition* 0 < θ < 1 *requires* |Ω|^{2}*K*^{(1,2)}*K*^{(2,1)} < 1. *What is striking about it is that it is achieved if* $max\left\{{K}^{(1,2)},{K}^{(2,1)}\right\}<\frac{1}{|\Omega |}$, *similarly to Remark 14 above*.

### 3.5. Simulations

There are multiple parameters that affect the stability of a fixed point solution to the DDNFs: the choice of the kernel function *W*(*x, y*), the pulse emission rate function *G*, the size of the field |Ω|, the time scale function ϕ(*h*), the number of points per layer *M*, and the length of time *N*. In these simulations, we consider Ω = [−20, 20], *h* = 0.8, *N* = 100, *M* = 200, and $W(x,y)={\sigma}^{+}{e}^{-0.5{(x-y)}^{2}/{\sigma}_{1}^{2}}-{\sigma}^{-}{e}^{-0.5{(x-y)}^{2}/{\sigma}_{2}^{2}}$, with ${\sigma}^{+}=4,{\sigma}^{-}=1.5,{\sigma}_{1}=1$, and σ_{2} = 4.5. We select a resting level ν = −0.5. To initialize our dynamical system, we choose *V*_{i,0} = −1.5 for *i* = 1, 2, ⋯, *M*. In Figures 5–8 below, (a) is the three-dimensional representation of *V*_{n}(*x*): = *V*(*x, n*), (b) represents *V*_{n}(*x*) as a function of *n* for a given *x* value, (c) represents *V*_{n}(*x*) as a function of *x* for a given *n* value, and (d) represents the cobweb diagram for a given *x* value, namely a plot of *V*_{n+1}(*x*) vs. *V*_{n}(*x*), in which the green dot represents the starting point *V*_{0}(*x*) = −1.5 and the red arrows connect the points with coordinates [*V*_{n}(*x*), *V*_{n}(*x*)] and [*V*_{n}(*x*), *V*_{n+1}(*x*)] to show the evolution of the dynamical system over time. Convergence occurs when the arrows stop evolving, and oscillations occur when they circle around indefinitely. In the multi-layer case, we let *L* = 250.
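A minimal sketch of this single-layer setup is given below. It assumes an update of the form *V*_{i,n+1} = α*V*_{i,n} + (1 − α)[ν + *U*_{i,n} + (|Ω|/*M*) Σ_{j} *W*(*x*_{i}, *x*_{j}) *G*(*V*_{j,n})], with a Riemann-sum weight |Ω|/*M* standing in for the field integral; the paper's nearly exact discretization may differ in details. It reproduces the no-input Heaviside case of section 3.5.1.

```python
import math

# Parameters from section 3.5
M, N, h = 200, 100, 0.8
alpha = math.exp(-h)                  # time-scale factor
nu = -0.5                             # resting level
xs = [-20.0 + 40.0 * i / (M - 1) for i in range(M)]
dx = 40.0 / M                         # Riemann-sum weight |Omega|/M (assumption)

def W(x, y):                          # Mexican-hat kernel: sigma+=4, sigma-=1.5, sigma1=1, sigma2=4.5
    d2 = (x - y) ** 2
    return 4.0 * math.exp(-0.5 * d2) - 1.5 * math.exp(-0.5 * d2 / 4.5 ** 2)

Wm = [[W(xi, xj) for xj in xs] for xi in xs]   # precomputed kernel matrix

V = [-1.5] * M                        # V_0(x) = -1.5
for n in range(N):
    active = [j for j in range(M) if V[j] > 0]        # neurons firing under Heaviside G_1
    V = [alpha * V[i] + (1 - alpha) * (nu + dx * sum(Wm[i][j] for j in active))
         for i in range(M)]

# With U = 0 and V_0 < 0, no neuron ever fires and the field settles at nu (Figure 5).
print(all(abs(v - nu) < 1e-6 for v in V))  # -> True
```

Replacing the Heaviside function by a sigmoid, or adding a Gaussian *U*_{i,n}, yields the regimes shown in Figures 6–8.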

**Figure 5**. Here *U*_{i,n} = 0 and *G*(*v*) = *G*_{1}(*v*). **(A–D)** We observe from all graphs that *V*_{n}(*x*) quickly settles to the resting level ν = −0.5, confirming the theoretical results in section 3.3.2.

#### 3.5.1. Single-Layer With Heaviside Function and No External Input

In this simulation, we consider single-layer DDNFs with *U*_{i,n} = 0 and a Heaviside pulse emission function, see Figure 5 below.

#### 3.5.2. Single-Layer With Sigmoid Function and No External Input

In this simulation, we consider single-layer DDNFs with *U*_{i,n} = 0 but with a sigmoid pulse emission function, see Figure 6 below.

**Figure 6**. Here *U*_{i,n} = 0 and *G*(*v*) = *G*_{2}(*v*). From **(A)**, we observe that the system alternates between inhibitory and excitatory phases. **(B,D)** Show that for fixed *x* = 9.548, the system converges over time. In **(C)**, we clearly see the oscillatory regime of the system in the space domain for fixed *n* = 98.

#### 3.5.3. Single-Layer With Heaviside Function With Gaussian External Input

In this simulation, we consider single-layer DDNFs with a Heaviside pulse emission function and a Gaussian external input ${U}_{i,n}={(2\pi )}^{-1/2}{e}^{-\frac{1}{2}{x}_{i}^{2}}$, see Figure 7 below.

**Figure 7**. Here ${U}_{i,n}={(2\pi )}^{-1/2}{e}^{-\frac{1}{2}{x}_{i}^{2}}$ and *G*(*v*) = *G*_{1}(*v*). From **(A)**, we observe that the system has an excitatory phase around 0 and an inhibitory phase farther away. **(B,D)** Show that for fixed *x* = 2.111, the system converges over time. In **(C)**, we clearly observe a high level of excitation occurring around 0, and the system settling back to its resting level farther away from 0, for fixed *n* = 38.

#### 3.5.4. Single-Layer With Sigmoid Function With Gaussian External Input

In this simulation, we consider single-layer DDNFs with a sigmoid pulse emission function and a Gaussian external input ${U}_{i,n}={(2\pi )}^{-1/2}{e}^{-\frac{1}{2}{x}_{i}^{2}}$, see Figure 8 below.

**Figure 8**. Here ${U}_{i,n}={(2\pi )}^{-1/2}{e}^{-\frac{{x}_{i}^{2}}{2}}$ and *G*(*v*) = *G*_{2}(*v*). This case is similar to the second case above. From **(A)**, we observe that the system also alternates between inhibitory and excitatory phases. **(B,D)** Show that for fixed *x* = −18.79, the system converges over time. In **(C)**, we clearly see the oscillatory regime of the system in the space domain for fixed *n* = 86. The system in this case is more frequently inhibitory than excitatory.

#### 3.5.5. Multiple-Layer With Sigmoid Function With No External Input

In this simulation, we consider multiple-layer DDNFs with a sigmoid pulse emission rate function and no external input, *U*_{i,n} = 0, see Figure 9 below.

**Figure 9**. Here *U*_{i,n} = 0 and *G*(*v*) = *G*_{2}(*v*) with *L* = 200 and *M* = 1 neuron per layer. From **(A)**, we observe that the system quickly becomes excitatory and settles into a constant state over time. **(B,D)** Show that for fixed *k* = 293, the system converges over time. In **(C)**, we clearly see the constant state for *n* = 76.

#### 3.5.6. Multiple-Layer With Sigmoid Function With Gaussian External Input

In this simulation, we consider multiple-layer DDNFs with a sigmoid pulse emission rate function and a Gaussian external input ${U}_{i,n}={(2\pi )}^{-1/2}{e}^{-\frac{{x}_{i}^{2}}{2}}$, see Figure 10 below.

**Figure 10**. Here ${U}_{i,n}={(2\pi )}^{-1/2}{e}^{-\frac{{x}_{i}^{2}}{2}}$ and *G*(*v*) = *G*_{2}(*v*) with *L* = 200 and *M* = 1 neuron per layer. From **(A)**, we observe that the system is quickly excitatory and settles into a constant “Gaussian” state over time. **(B,D)** Show that for fixed *k* = 114, the system converges over time. In **(C)**, we clearly see the “Gaussian” state for *n* = 45, with peaks located at around *L* = 250.

#### 3.5.7. Effect of α

In this simulation, we consider *V*_{N}(*x*, α), where *N* = 100, for different values of α. Since the convergence to the fixed solution is quick, we expect *V*_{N}(*x*, α) to be independent of α at *n* = *N* = 100, in that any cross-section of the three-dimensional plot *V*_{N}(*x*, α) should be the same. This is shown in Figure 11 below for *G*(*v*) = *G*_{1}(*v*) and ${U}_{i,n}={(2\pi )}^{-1/2}{e}^{-\frac{{x}_{i}^{2}}{2}}$.

**Figure 11**. Here ${U}_{i,n}={(2\pi )}^{-1/2}{e}^{-\frac{{x}_{i}^{2}}{2}}$ and *G*(*v*) = *G*_{2}(*v*) with *L* = 1 and *M* = 200 neurons per layer. We see that the cross-sections are as in Figure 7 above, and the “flaps” at the end represent the case α = 1, which amounts to the constant case.

## 4. Discussion

In this paper, we have proposed a discrete model for dynamic neural fields. We have given another proof of the existence of nontrivial solutions of dynamic neural fields, and we have discussed the stability of these nontrivial solutions in some particular cases. The simulations we propose capture very well the notion of stability for dynamic neural fields. Below are some observations drawn from this analysis.

1. We observe that the case of no external input *U*_{i,n} with one layer and multiple neurons and the case of multiple layers with one neuron each are very similar, up to the shape of the potential function ${V}_{i,n}^{(k)}$. In fact, this is not surprising, since multiple layers with one neuron each amount to one layer with neurons connected as in a graph.

2. An important observation is that the stability conditions depend on the knowledge of a nontrivial solution *V*_{∗} of the DDNFs. Two things to note about **V_{∗}(X)** are:

(a) We cannot obtain **V_{∗}(X)** in closed form. We may, however, rely on estimates like the one proposed in Kwessi (2021). Even then, using an estimate may prove delicate because of limited accuracy.

(b) If one cannot use a true value for **V_{∗}(X)**, one may use the properties of the pulse emission functions such as the Heaviside or sigmoid functions, which in most applications are bounded.

3. We note that the size of the domain Ω is an important parameter for the shape of ${V}_{i,n}^{(k)}$.

4. As for the types of kernels, we note that Gaussian and Laplacian kernels tend to produce similar results, which are sometimes very different from those produced by hyperbolic tangent kernels.

5. We note that the choice of α = *e*^{−h} is similar to an Euler forward discretization with ${e}^{-h}=\frac{dt}{\tau}$ where *dt* ≪ τ. This choice is made for simplification purposes and also out of an abundance of caution, due to the fact that for certain one-dimensional systems, the Euler forward discretization has been shown not to always preserve the true dynamics of the ordinary differential equation, see for instance Kwessi et al. (2018).

6. Another important note is that the parameter α was not found to be a bifurcation parameter; thus, the bifurcation diagram is not given, for the sake of simplicity.

7. We acknowledge that in the two special cases discussed for multiple-layers DDNFs, at least theoretically, the results so far apply only to positive connectivity functions *W*(*x, y*), because we rely on results previously established in the current literature. Simulations, however, suggest that they may also extend to negative connectivity functions, and it would be a worthwhile exercise to establish these results theoretically.

## Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.

## Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

## Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## References

Amari, S.-I. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. *Biol. Cybern.*, 27, 77–87. doi: 10.1007/BF00337259

Beim, P. G., and Hutt, A. (2014). Attractor and saddle node dynamics in heterogeneous neural fields. *EPJ Nonlin. Biomed. Phys. EDP Sci.* 2:2014. doi: 10.1140/epjnbp17

Beurle, R. L. (1956). Properties of a mass of cells capable of regenerating pulses. *Philos. Trans. R. Soc. Lond. B* 240, 55–94.

Bicho, E., Louro, L., and Erlhangen, W. (2010). Integrating verbal and non-verbal communication in a dynamic neural field for human-robot interaction. *Front. Neurorobot.* 4:5. doi: 10.3389/fnbot.2010.00005

Bicho, E., Mallet, P., and Schöner, G. (2000). Target representation on an autonomous vehicle with low-level sensors. *Int. J. Robot. Res.* 19, 424–447. doi: 10.1177/02783640022066950

Camperi, M., and Wang, X.-J. (1998). A model of visuospatial short-term memory in prefrontal cortex: recurrent network and cellular bistability. *J. Comput. Neurosci.* 4, 383–405. doi: 10.1023/A:1008837311948

De Domenico, M., Solé-Ribalta, A., Cozzo, E., Kivelä, M., Moreno, Y., Porter, M. A., et al. (2013). Mathematical formulation of multilayer networks. *Phys. Rev. X* 3:041022. doi: 10.1103/PhysRevX.3.041022

Durstewitz, D. (2017). *Advanced Data Analysis in Neuroscience*. Bernstein Series in Computational Neuroscience. Cham: Springer.

Durstewitz, D., Seamans, J. K., and Sejnowski, T. J. (2000). Neurocomputational models of working memory. *Nat. Neurosci.* 3(Suppl.), 1184–1191. doi: 10.1038/81460

Erlhagen, W., and Bicho, E. (2006). The dynamics neural field approach to cognitive robotics. *J. Neural Eng.* 3, R36–R54. doi: 10.1088/1741-2560/3/3/R02

Erlhagen, W., and Schöner, G. (2001). Dynamic field theory of movement preparation. *Psychol. Rev.* 109, 545–572. doi: 10.1037/0033-295x.109.3.545

Ermentrout, G. B., and Cowan, J. D. (1979). A mathematical theory of visual hallucination patterns. *Biol. Cybern.* 34, 137–150. doi: 10.1007/BF00336965

Hammerstein, A. (1930). Nichtlineare integralgleichungen nebst anwendungen. *Acta Math.* 54, 117–176. doi: 10.1007/BF02547519

Jin, D., Qin, Z., Yang, M., and Chen, P. (2021). A novel neural model with lateral interaction for learning tasks. *Neural Comput.* 33, 528–551. doi: 10.1162/neco_a_01345

Kulkarni, D., Schmidt, D., and Tsui, S. K. (1999). Eigenvalues of tridiagonal pseudo-Toeplitz matrices. *Linear Algeb. Appl.* 297, 63–80.

Kwessi, E. (2021). A consistent estimator of nontrivial stationary solutions of dynamic neural fields. *Stats* 4, 122–137. doi: 10.3390/stats4010010

Kwessi, E., and Edwards, L. (2020). “Artificial neural networks with a signed-rank objective function and applications,” in *Communication in Statistics–Simulation and Computation*. 1–26. doi: 10.1080/03610918.2020.1714659

Kwessi, E., Elaydi, S., Dennis, B., and Livadiotis, G. (2018). Nearly exact discretization of single species population models. *Nat. Resour. Model.* 31:e12167. doi: 10.1111/nrm.12167

Lasota, A., and Mackey, M. C. (2004). *Chaos, Fractals, and Noise. Applied Mathematical Sciences*. New York, NY: Springer-Verlag.

Lin, H.-T., and Lin, C.-J. (2003). A study on sigmoid kernels for SVM and the training of non-PSD kernels by SMO-type methods. *Neural Comput.* 13, 2119–2147.

Lin, K. K., Shea-Brown, E., and Young, L.-S. (2009). Spike-time reliability of layered neural oscillator networks. *J. Comput. Neurosci.* 27, 135–160. doi: 10.1007/s10827-008-0133-3

Molinari, L. G. (2008). Determinant of block tridiagonal matrices. *Linear Algeb. Appl.* 429, 2221–2226. doi: 10.1016/j.laa.2008.06.015

Neumann, K., and Steil, J. J. (2011). “Batch intrinsic plasticity for extreme learning machines,” in *International Conference on Artificial Neural Networks* (Berlin: Springer), 339–346.

Nunez, P. L., and Srinivasan, R. (2006). *Electric Fields of the Brain: The Neurophysics of EEG, 2nd Edn*. Oxford University Press. doi: 10.1093/acprof:oso/9780195050387.001.0001

Perone, A., and Simmering, V. R. (2019). Connecting the dots: finding continuity across visuospatial tasks and development. *Front. Psychol.* 2019:1685. doi: 10.3389/fpsyg.2019.01685

Pozo, K., and Goda, Y. (2010). Unraveling mechanisms of homeostatic synaptic plasticity. *Neuron* 66, 337–351. doi: 10.1016/j.neuron.2010.04.028

Quinton, J. C., and Goffart, L. (2017). A unified dynamic neural field model of goal directed eye-movements. *Connect. Sci.* 30, 20–52. doi: 10.1080/09540091.2017.1351421

Schmidt, B. K., Vogel, E. K., Woodman, G. F., and Luck, S. J. (2002). Voluntary and automatic attentional control of visual working memory. *Percept. Psychophys.* 64, 754–763. doi: 10.3758/bf03194742

Simmering, V. R., Schutte, A. R., and Spencer, J. P. (2008). Generalizing the dynamic field theory of spatial cognition across real and developmental time scales. *Brain Res.* 1202, 68–86. doi: 10.1016/j.brainres.2007.06.081

Strub, C., Schöner, G., Wörgötter, F., and Sandamirskaya, Y. (2017). Dynamic neural fields with intrinsic plasticity. *Front. Comput. Neurosci.* 11:74. doi: 10.3389/fncom.2017.00074

Sussilo, D., and Barack, O. (2013). Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. *Neural Comput.* 25, 626–649. doi: 10.1162/NECO_a_00409

Tass, P. (1995). Cortical pattern formation during visual hallucinations. *J. Biol. Phys.* 21, 177–210.

Wijeakumar, S., Ambrose, J. P., Spencer, J. P., and Curtu, R. (2017). Model-based functional neuroimaging using dynamic neural fields: an integrative cognitive neuroscience approach. *J. Math. Psychol.* 76(Pt B), 212–235. doi: 10.1016/j.jmp.2016.11.002

Williams, R. J., and Zipser, D. (1990). A learning algorithm for continually running fully recurrent neural networks. *Neural Comput.* 1, 256–263.

Wilson, H. R., and Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. *Biophys. J.* 12, 1–24. doi: 10.1016/S0006-3495(72)86068-5

Keywords: dynamic neural fields, discrete, stability, simulations, neurons

Citation: Kwessi E (2021) Discrete Dynamics of Dynamic Neural Fields. *Front. Comput. Neurosci.* 15:699658. doi: 10.3389/fncom.2021.699658

Received: 23 April 2021; Accepted: 02 June 2021;

Published: 08 July 2021.

Edited by: Feng Shi, Amazon, United States

Reviewed by: Jeremy Fix, CentraleSupélec, France; Flora Ferreira, School of Sciences, University of Minho, Portugal

Copyright © 2021 Kwessi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Eddy Kwessi, ekwessi@trinity.edu