ORIGINAL RESEARCH article

Front. Comput. Neurosci., 17 November 2022
Volume 16 - 2022 | https://doi.org/10.3389/fncom.2022.1029235

Outer-synchronization criterions for asymmetric recurrent time-varying neural networks described by differential-algebraic system via data-sampling principles

Ping Li1,2 Qing Liu2* Zhibing Liu2
  • 1College of Commercial, Pingxiang University, Pingxiang, China
  • 2College of Mathematics and Statistics, Huanggang Normal University, Huanggang, China

Asymmetric recurrent time-varying neural networks (ARTNNs) enable realistic brain-like models that help scholars explore the mechanisms of the human brain and realize applications of artificial intelligence; their dynamical behaviors, such as synchronization, have attracted extensive research interest owing to their applicability and flexibility. In this paper, we examined the outer-synchronization of ARTNNs described by a differential-algebraic system (DAS). We designed appropriate centralized and decentralized data-sampling approaches that fully account for information gathering at the times $t_k$ and $t_k^i$. Using the characteristics of integral inequalities and the theory of differential equations, several novel outer-synchronization conditions were established. These conditions facilitate the analysis and application of the dynamical behaviors of ARTNNs. The superiority of the theoretical results was then demonstrated by a numerical example.

1. Introduction

Novel approaches to artificial general intelligence (Yang et al., 2021a,b, 2022a,b,c) are critical in the field of brain-inspired intelligence for realizing high-level intelligence, high accuracy, high robustness, and low power consumption in comparison with state-of-the-art artificial intelligence systems. Research on neural networks can promote and accelerate the development of artificial intelligence. Owing to their dynamic complexity and a vast range of civil and military applications, neural networks (NNs) attract a great deal of interest in areas such as associative memory, classification, identification, and optimization (Hu and Hu, 2019; Zhang et al., 2021a; Lv et al., 2022). With the widespread application of NNs and the expansion of research, numerous varieties of NNs have been proposed, for instance conventional NNs (CNNs), feedforward NNs, and recurrent NNs (RNNs). RNNs are mostly employed in machine learning (Cho et al., 2014; Shi et al., 2017) and language processing (Mao et al., 2014; Yin et al., 2017). Combining differential equations with RNNs yields asymmetric RNNs (ARNNs) (Chang et al., 2019). Currently, relatively few studies have been conducted on ARNNs. Ansari (2022) suggested a single-layer ARNN for solving linear equations and provided a straightforward method for setting the connection weights. Lu et al. (2016) explored outer-synchronization of ARNNs via a data-sampling control mechanism. For more related works, refer to Liu et al. (2019, 2020, 2021), Lv et al. (2022a,b), and Zhang et al. (2020, 2021b).

Differential-algebraic systems (DASs), sometimes known as singular systems or constrained systems, provide a broader scope for modeling. DASs consist of differential equations and algebraic equations (AEs), the latter of which describe the constraints of the system. General DASs can be used to model power systems, while stochastic DASs are used to capture small changes in transmission line parameters and system loads (Federico and Zárate-Miñano, 2013). This category also includes aircraft flight trajectory tracking, optimal control systems, and manufacturing processes. Compared with differential dynamical systems, DASs yield superior modeling and simulation results in physics and engineering. Consequently, research on the applications and theory of DASs is proliferating. A study of the Lyapunov stability of equilibria in DASs is presented in Bill and Mareels (1990). Constantinos (1988) determined whether additional constraints satisfied by initial values can be derived by differentiating a subset of the nonlinear DAS equations. In Esposito and Floudas (2000), two global optimization techniques for DAS parameter estimation were suggested. State estimation and dynamic feedback stabilization of non-regular linear DASs have been investigated (Berger and Reis, 2017).

Synchronization is a dynamic behavior arising from the interaction of complex systems. It refers to the adjustment of the rhythm of a self-sustained periodic oscillator as a result of weak interactions. There are various types of synchronization, such as quasi-, complete, identical, finite-time, and generalized synchronization. Without loss of generality, quasi-synchronization means that the system, starting from any initial value, eventually converges to a bounded error region as t grows; in identical synchronization, the difference between any two solutions of the system converges to zero with time t; finite-time synchronization means that the synchronization error reaches zero within a finite time. Owing to their differing control methodologies, different systems exhibit different synchronization styles. Synchronization, in general, is a steady state of equilibrium inside a system or between master and slave systems. For NNs, multi-agent systems, and DASs, synchronization is a key research topic that has yielded numerous useful results. Using the features of Mittag-Leffler functions and stochastic matrices, Liu et al. (2018) derived two sets of conditions for the global synchronization of coupled fractional-order systems. Wu et al. (2019) suggested a discrete-time periodic intermittent observation control to investigate the synchronization of stochastic NNs. The fixed-time synchronization problem of discontinuous fuzzy inertial NNs with uncertain parameters was studied by constructing a new type of discontinuous control input and applying the Lyapunov-Krasovskii functional technique (Kong et al., 2021). Chen et al. (2021) coupled aperiodic intermittent control with event-triggered control, investigated the quasi-synchronization problem of coupled delayed memristive NNs, and derived an effective criterion.

Sampled-data control is delivered in discrete time rather than continuous time across a network (Chen and Han, 2013). By designing a proper control mechanism, sampled-data control uses only partial information and thus reduces communication costs significantly. The primary sampled-data control techniques for the corresponding system rely on the discrete-time method, the impulsive-system method, and the input-delay method. The selection of the sampling interval is a crucial component of sampled-data control; the key research problem is how to use discretely sampled data to achieve the control objective while keeping the sampling interval as large as possible. In recent years, the control of sampled-data systems has attracted the interest of scholars. The event-triggered control strategy based on sampled data, for example, has reached a consensus among researchers (Su et al., 2017). The setting of the sampling interval is crucial to the control mechanism (Syed Ali et al., 2019). How to build the most efficient and cost-effective sampling-based control system is thus a topic worthy of investigation. For example, Liu et al. (2017) offered a memory sampled-data control strategy with constant signal propagation delay to solve the stabilization problem of T-S fuzzy systems by developing a Lyapunov functional.

Based on the above analysis, this research uses centralized and decentralized data-sampling principles to explore conditions under which ARTNNs achieve outer-synchronization. Utilizing efficient procedures to maximize access to information with limited resources is an improvement. To develop a reasonable solution, we thoroughly evaluated the system's structural characteristics and state variables in accordance with the centralized and decentralized principles and selected suitable sampling intervals. Practical conditions for the outer-synchronization of the system are derived from the characteristics of the DAS. The methodology in this paper thus advances DAS research.

The remainder of the work is structured as follows. Section 2 presents the preliminaries and problem statement. Section 3 gives the main results and their proofs. Section 4 illustrates the simulations. Section 5 concludes the article.

2. Preliminaries

In this section, a DAARTNN model is established. Some basic definitions and one useful lemma are presented.

Enlightened by the framework of Esposito and Floudas (2000) and Berger and Reis (2017), the singular ARTNN is expressed as follows:

$$E\frac{dx_i(t)}{dt} = -Cx_i(t) + A\sum_{j=1}^{n} f_j(x_j(t)) + J_i(t), \qquad (1)$$

where $x(t) = (x_1(t), \ldots, x_n(t)) \in \mathbb{R}^n$ is the neuron state vector, $E$ is a singular constant matrix with $0 < \operatorname{rank}(E) = r < n$, $C$ is the state coefficient matrix with respect to time $t$, and $A$ is the connection weight matrix between neurons. $C, A \in \mathbb{R}^{n\times n}$ are regular. $J_i(t)$ is the external input.
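
As a quick illustration of these assumptions (a sketch with made-up dimensions and matrices, not values from the paper), the singularity of $E$ and the regularity of $C$ and $A$ can be checked numerically:

```python
import numpy as np

# Hypothetical dimensions (not from the paper): n neurons, differential part of size r < n.
n, r = 4, 2

# E = diag(I_r, 0) is singular with rank(E) = r, as assumed for model (1).
E = np.zeros((n, n))
E[:r, :r] = np.eye(r)
assert 0 < np.linalg.matrix_rank(E) == r < n

# Example (made-up) coefficient matrices; C and A are assumed regular in the paper.
C = np.diag([1.0, 1.2, 0.8, 0.9])
A = 0.5 * np.random.default_rng(0).standard_normal((n, n))
assert np.linalg.matrix_rank(C) == n and np.linalg.matrix_rank(A) == n
```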

Model (1) is a singular NN with implicit constraints; how to express the constraints explicitly and explore the properties of such systems is a challenging task.

We assume that
$$C = \begin{pmatrix} c_r(t) & 0 \\ 0 & c_{n-r}(t) \end{pmatrix}, \quad A = \begin{pmatrix} a_{r,s}(t) & a_{r,n-s}(t) \\ a_{n-r,s}(t) & a_{n-r,n-s}(t) \end{pmatrix}, \quad E = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix},$$
where $I_r$ is the $r \times r$ identity matrix.

Then, the model (1) is equivalent to the following system

$$I_r\frac{dx_i^{(1)}(t)}{dt} = -c_r(t)x_i^{(1)}(t) + a_{r,s}(t)f_j^{(1)}(x_j^{(1)}(t)) + a_{r,n-s}(t)f_j^{(2)}(x_j^{(2)}(t)) + J_i^{(1)}(t), \qquad (2)$$
$$0 = -c_{n-r}(t)x_i^{(2)}(t) + a_{n-r,s}(t)f_j^{(1)}(x_j^{(1)}(t)) + a_{n-r,n-s}(t)f_j^{(2)}(x_j^{(2)}(t)) + J_i^{(2)}(t), \qquad (3)$$

where $x_i(t) = (x_i^{(1)}(t), x_i^{(2)}(t))^T$ with $x_i^{(1)}(t) \in \mathbb{R}^r$, $x_i^{(2)}(t) \in \mathbb{R}^{n-r}$; $f_j(x_j(t)) = (f_j^{(1)}(x_j^{(1)}(t)), f_j^{(2)}(x_j^{(2)}(t)))^T$ with $f_j^{(1)}(x_j^{(1)}(t)) \in \mathbb{R}^r$, $f_j^{(2)}(x_j^{(2)}(t)) \in \mathbb{R}^{n-r}$; and $J_i(t) = (J_i^{(1)}(t), J_i^{(2)}(t))^T$.

Models (2) and (3), viewed as a whole, are equivalent to model (1) with explicit constraints. Evidently, model (2) is still a differential system, while model (3) is a class of AEs that constrains it; these AEs also contain nonlinear terms. Thus, the DAARTNN is created by merging models (2) and (3).

Remark 2.1 Obviously, when rank(E) = n, the singular ARTNN reduces to an ordinary ARTNN. Here, we assume that 0 < rank(E) = r < n holds only to show that the methods and conclusions are also applicable to general NN models.

The coefficient matrices of models (2) and (3) have different dimensions, but in this paper we assume that the DEs and the AEs share the same dimension. Therefore, combining models (2) and (3), we obtain the DAARTNN as follows:

$$\begin{cases} \dfrac{dx_i(t)}{dt} = -c_i(t)x_i(t) + \sum\limits_{j=1}^{n} a_{ij}(t)f_j(x_j(t)) + \sum\limits_{j=1}^{n} b_{ij}(t)g_j(y_j(t)) + J_i \\[2mm] 0 = -d_i(t)y_i(t) + \sum\limits_{j=1}^{n} p_{ij}(t)h_j(x_j(t)) + \sum\limits_{j=1}^{n} q_{ij}(t)k_j(y_j(t)) + I_i \end{cases} \qquad (4)$$

where $c_i(t)$, $d_i(t)$, $a_{ij}(t)$, $b_{ij}(t)$, $p_{ij}(t)$, and $q_{ij}(t)$ are piecewise continuous and bounded. The activation functions $f_j(\cdot)$, $g_j(\cdot)$, $h_j(\cdot)$, and $k_j(\cdot)$ satisfy

$$0 \le \frac{f_j(u)-f_j(v)}{u-v} \le F_j, \quad 0 \le \frac{g_j(u)-g_j(v)}{u-v} \le G_j, \quad 0 \le \frac{h_j(u)-h_j(v)}{u-v} \le H_j, \quad 0 \le \frac{k_j(u)-k_j(v)}{u-v} \le K_j, \qquad (5)$$

for all $u \neq v$, where $F_j > 0$, $G_j > 0$, $H_j > 0$, and $K_j > 0$ are constants and $j = 1, \ldots, n$.
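
For instance, the logistic function used in the numerical example of Section 4, $f(u) = 1/(1+e^{-u})$, satisfies (5) with $F_j = \frac{1}{2}$, since its difference quotients lie in $[0, \frac{1}{4}]$. A small numerical spot check (an illustrative sketch, not part of the original derivation) is:

```python
import numpy as np

def f(u):
    """Logistic activation, as used in the example of Section 4."""
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(1)
u, v = rng.uniform(-10, 10, 100_000), rng.uniform(-10, 10, 100_000)
mask = u != v
quot = (f(u[mask]) - f(v[mask])) / (u[mask] - v[mask])

F_j = 0.5  # sector bound used later in the numerical example
print(quot.min() >= 0.0, quot.max() <= F_j)  # expected: True True
```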

Under the centralized data-sampling strategy, the continuous time variable $t$ is replaced by a sequence of discrete sampling times $t_k$. The model (4) can then be rewritten as

$$\begin{cases} \dfrac{dx_i(t)}{dt} = -c_i(t)x_i(t_k) + \sum\limits_{j=1}^{n} a_{ij}(t)f_j(x_j(t_k)) + \sum\limits_{j=1}^{n} b_{ij}(t)g_j(y_j(t_k)) + J_i \\[2mm] 0 = -d_i(t)y_i(t_k) + \sum\limits_{j=1}^{n} p_{ij}(t)h_j(x_j(t_k)) + \sum\limits_{j=1}^{n} q_{ij}(t)k_j(y_j(t_k)) + I_i \end{cases} \qquad (6)$$

for $i = 1, \ldots, n$. $\{t_k\}_{k=0}^{+\infty}$ is an increasing time sequence. At each time point $t_k$, every neuron broadcasts its state to its out-neighbors and receives the state information sent by its in-neighbors.

Similarly, under the decentralized data-sampling strategy, the system (4) can be rewritten as follows:

$$\begin{cases} \dfrac{dx_i(t)}{dt} = -c_i(t)x_i(t_k^i) + \sum\limits_{j=1}^{n} a_{ij}(t)f_j(x_j(t_k^j)) + \sum\limits_{j=1}^{n} b_{ij}(t)g_j(y_j(t_k^j)) + J_i \\[2mm] 0 = -d_i(t)y_i(t_k^i) + \sum\limits_{j=1}^{n} p_{ij}(t)h_j(x_j(t_k^j)) + \sum\limits_{j=1}^{n} q_{ij}(t)k_j(y_j(t_k^j)) + I_i \end{cases} \qquad (7)$$

for $i = 1, \ldots, n$. For each neuron $i \in \{1, \ldots, n\}$, the increasing time sequence $\{t_k^i\}_{k=0}^{+\infty}$ is ordered as $0 = t_0^i < t_1^i < \cdots < t_k^i < \cdots$. Each neuron broadcasts its state to its out-neighbors and receives the state information sent by its in-neighbors at time $t_k^i$.

Remark 2.2 For the purposes of sampled-data control, the system only updates its information at the times $t_k$ and $t_k^i$. The distinction between $t_k$ and $t_k^i$ is that $t_k$ is centralized whereas $t_k^i$ is decentralized, so the times at which information is updated differ. In the centralized style, information is transferred at the common time points $t_1, t_2, t_3, \ldots, t_k, \ldots$, whereas in the decentralized style, information is transferred at the neuron-specific time points $t_1^1, t_1^2, \ldots, t_1^n, t_2^1, t_2^2, \ldots, t_2^n, \ldots, t_k^1, t_k^2, \ldots$
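
To make the distinction concrete, the following sketch contrasts one shared sampling sequence with per-neuron sequences (the sampling instants are illustrative placeholders, not generated by the triggering rules introduced later):

```python
# Centralized: one sampling sequence shared by every neuron.
t_central = [0.0, 0.4, 0.9, 1.5]          # t_0 < t_1 < t_2 < ...

# Decentralized: each neuron i keeps its own increasing sequence t_k^i.
t_decentral = {
    1: [0.0, 0.3, 0.8, 1.4],               # neuron 1 samples at these instants
    2: [0.0, 0.5, 1.1, 1.6],               # neuron 2 samples at these instants
}

# Network-wide update events (cf. Remark 3.2 below) are the union of all t_k^i.
l = sorted({t for seq in t_decentral.values() for t in seq})
print(l)
```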

To begin the discussion, we give the following definitions and lemma.

Definition 1. Given positive constants $\varsigma_i$ $(i = 1, \ldots, n)$, the weighted $l_1$ norm is defined as

$$\|x\|_{1,\xi} = \sum_{i=1}^{n} \varsigma_i |x_i|.$$

Definition 2. Consider any two trajectories $(x(t), y(t))$ and $(u(t), v(t))$ of system (4) starting from different initial values $(x(0), y(0))$ and $(u(0), v(0))$. The system (4) is said to achieve outer-synchronization if

$$\lim_{t\to+\infty}\|x(t)-u(t)\| = 0, \qquad \lim_{t\to+\infty}\|y(t)-v(t)\| = 0,$$

where ||·|| is the norm of state.

Since a DAS is a combination of differential equations and AEs, we aim to transform it into regular differential equations by differentiation. The index of the DAS is the number of differentiations required in this procedure. For instance, an ordinary differential equation has index 0.
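
As a hedged illustration with a generic semi-explicit DAS (not the specific model above), one differentiation of the algebraic constraint already yields a differential equation, which is exactly the index-1 situation:

```latex
% Generic semi-explicit DAS: \dot{x} = f(x,y), 0 = g(x,y).
% Differentiating the algebraic constraint once with respect to t gives
\[
0 = \frac{d}{dt}\, g(x, y)
  = \frac{\partial g}{\partial x}\,\dot{x} + \frac{\partial g}{\partial y}\,\dot{y}
  \;\Longrightarrow\;
\dot{y} = -\Big(\frac{\partial g}{\partial y}\Big)^{-1}\frac{\partial g}{\partial x}\, f(x, y),
\]
% which is well defined whenever \(\partial g/\partial y\) is nonsingular;
% a single differentiation suffices, so such a DAS has index 1.
```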

Lemma 1. The DAARTNN is said to be of index 1 if and only if

$$-d_i(t) + \sum_{j=1}^{n} p_{ij}(t)h_j(x_j(t)) + \sum_{j=1}^{n} q_{ij}(t)k_j(y_j(t)) > 0.$$

Based on the above analysis, we set

$$\begin{aligned}
&\mu_j(\xi,t)=c_j(t)-F_ja_{jj}^{+}(t)-F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|,\qquad
v_j(\xi,t)=G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|,\\
&\sigma_j(t)=\frac{p_{jj}^{+}(t)H_j+H_j\sum_{i\neq j}|p_{ij}(t)|}{d_j(t)-q_{jj}^{+}(t)K_j-K_j\sum_{i\neq j}|q_{ij}(t)|},\\
&M_1=\max_{1\le j\le n}\sup_{t\ge t_0}\Big\{c_j(t)+F_ja_{jj}^{+}(t)+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big\},\qquad
M_2=\max_{1\le j\le n}\sup_{t\ge t_0}\Big\{G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big\},
\end{aligned}$$

where $a_{ii}^{+}(t) = \max\{a_{ii}(t), 0\}$, $b_{ii}^{+}(t) = \max\{b_{ii}(t), 0\}$, $a_{ii}^{-}(t) = \min\{a_{ii}(t), 0\}$, and $b_{ii}^{-}(t) = \min\{b_{ii}(t), 0\}$.

Because $v_j(\xi,t)$ and $\sigma_j(t)$ are bounded, there exist constants $\delta$ and $N$ satisfying

$$\sup_{t\in[0,+\infty)} \sigma_j(t) \le \delta, \qquad \sup\{\mu_j(\xi,s) - \delta v_j(\xi,s)\} \le N.$$
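
The quantities above can be evaluated numerically once the coefficient matrices are fixed; the following sketch assumes constant matrices and hypothetical argument names (the concrete values appear in the example of Section 4):

```python
import numpy as np

def structural_quantities(C, D, A, B, P, Q, F, G, H, K, sigma_w):
    """Evaluate mu_j, v_j, sigma_j, M1, M2 for constant coefficient matrices.

    sigma_w     : weights varsigma_i of the weighted l1 norm,
    F, G, H, K  : vectors of the sector bounds in (5).
    """
    n = len(C)
    mu, v, sig = np.empty(n), np.empty(n), np.empty(n)
    for j in range(n):
        off_a = sum(sigma_w[i] / sigma_w[j] * abs(A[i, j]) for i in range(n) if i != j)
        off_b = sum(sigma_w[i] / sigma_w[j] * abs(B[i, j]) for i in range(n) if i != j)
        off_p = sum(abs(P[i, j]) for i in range(n) if i != j)
        off_q = sum(abs(Q[i, j]) for i in range(n) if i != j)
        mu[j] = C[j] - F[j] * max(A[j, j], 0) - F[j] * off_a
        v[j] = G[j] * max(B[j, j], 0) + G[j] * off_b
        sig[j] = (max(P[j, j], 0) * H[j] + H[j] * off_p) / \
                 (D[j] - max(Q[j, j], 0) * K[j] - K[j] * off_q)
    M1 = max(C[j] + F[j] * max(A[j, j], 0) +
             F[j] * sum(sigma_w[i] / sigma_w[j] * abs(A[i, j]) for i in range(n) if i != j)
             for j in range(n))
    M2 = max(v)
    return mu, v, sig, M1, M2
```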

3. Main results

This section shows how to build the controls of the system using the settings from the previous section and the centralized and decentralized data-sampling principles.

3.1. Structure-dependent centralized and decentralized data sampling

Denote $w(t) = [w_1(t), \ldots, w_n(t)]^T$ and $z(t) = [z_1(t), \ldots, z_n(t)]^T$ with $w_i(t) = x_i(t) - u_i(t)$, $z_i(t) = y_i(t) - v_i(t)$, and $\bar{f}_i(t) = f_i(x_i(t)) - f_i(u_i(t))$, $\bar{g}_i(t) = g_i(y_i(t)) - g_i(v_i(t))$, $\bar{h}_i(t) = h_i(x_i(t)) - h_i(u_i(t))$, $\bar{k}_i(t) = k_i(y_i(t)) - k_i(v_i(t))$. Then it holds that

$$\begin{cases} \dfrac{dw_i(t)}{dt} = -c_i(t)w_i(t_k) + \sum\limits_{j=1}^{n} a_{ij}(t)\bar{f}_j(t_k) + \sum\limits_{j=1}^{n} b_{ij}(t)\bar{g}_j(t_k) \\[2mm] 0 = -d_i(t)z_i(t_k) + \sum\limits_{j=1}^{n} p_{ij}(t)\bar{h}_j(t_k) + \sum\limits_{j=1}^{n} q_{ij}(t)\bar{k}_j(t_k), \end{cases} \qquad (8)$$

for all t ∈ [tk, tk+1), i = 1, …, n and k = 0, 1, 2, ….

The following theorem gives conditions that guarantee that the system (4) reaches outer-synchronization in the $l_1$ norm.

Theorem 1. Assume that $\varepsilon_a \in (0,1)$ and $\varepsilon_0 > 0$ with $N\varepsilon_a \le \varepsilon_0(2-\varepsilon_a)$. Suppose $\mu_j(\xi,s) - \delta v_j(\xi,s) \ge \varepsilon_0$. Set an increasing time-point sequence $\{t_k\}$ as

$$t_{k+1} = \sup_{\tau \ge t_k}\Big\{\tau : \min_{j=1,\ldots,n}\int_{t_k}^{t}[\mu_j(\xi,s) - \delta v_j(\xi,s)]\,ds \le \varepsilon_a, \ \forall t \in (t_k, \tau]\Big\}. \qquad (9)$$
Then the system (4) reaches outer-synchronization.

Proof. Since $\varepsilon_0 \le \mu_j(\xi,s) - \delta v_j(\xi,s) \le N$, we have

$$\varepsilon_0(t - t_k) \le \int_{t_k}^{t}[\mu_j(\xi,s) - \delta v_j(\xi,s)]\,ds \le N(t - t_k), \qquad (10)$$

where j = 1, 2, …, n and t ∈ [tk, tk+1]. Based on data-sampling principles (9), the state will not be sampled until the following equation holds:

$$\int_{t_k}^{t}[\mu_j(\xi,s) - \delta v_j(\xi,s)]\,ds = \varepsilon_a, \qquad (11)$$

when t = tk+1, from (10) and (11),

$$\varepsilon_0(t_{k+1} - t_k) \le \varepsilon_a \le N(t_{k+1} - t_k),$$

then

$$\frac{\varepsilon_a}{N} \le t_{k+1} - t_k \le \frac{\varepsilon_a}{\varepsilon_0}, \qquad (12)$$

so,

$$\int_{t_k}^{t_{k+1}}[\mu_j(\xi,s) - \delta v_j(\xi,s)]\,ds \le N(t_{k+1} - t_k) \le \frac{N\varepsilon_a}{\varepsilon_0}, \qquad (13)$$

Combining (12) and (13), we have

$$\varepsilon_a \le \int_{t_k}^{t_{k+1}}[\mu_j(\xi,s) - \delta v_j(\xi,s)]\,ds \le 2 - \varepsilon_a. \qquad (14)$$

Consider wi(t)(i = 1, …, n) for each t ∈ [tk, tk+1], we have

$$\begin{aligned}
\sum_{i=1}^{n}\varsigma_i|w_i(t)| &= \sum_{i=1}^{n}\varsigma_i\Big|w_i(t_k)+\int_{t_k}^{t}\dot{w}_i(s)\,ds\Big| \\
&= \sum_{i=1}^{n}\Big|\varsigma_i w_i(t_k)-\int_{t_k}^{t}[c_i(s)-a_{ii}(s)m_i(s)]\,ds\,\varsigma_i w_i(t_k)+\int_{t_k}^{t}[b_{ii}(s)n_i(s)]\,ds\,\varsigma_i z_i(t_k) \\
&\quad +\sum_{j\neq i}\int_{t_k}^{t}\Big[a_{ij}(s)\tfrac{\varsigma_i}{\varsigma_j}m_j(t_k)\Big]ds\,\varsigma_j w_j(t_k)+\sum_{j\neq i}\int_{t_k}^{t}\Big[b_{ij}(s)\tfrac{\varsigma_i}{\varsigma_j}n_j(t_k)\Big]ds\,\varsigma_j z_j(t_k)\Big|, \qquad (15)
\end{aligned}$$

with

$$m_j(t) = \begin{cases}\dfrac{\bar{f}_j(t)}{w_j(t)}, & w_j(t) \neq 0 \\ 0, & w_j(t) = 0\end{cases} \qquad (16)$$
$$n_j(t) = \begin{cases}\dfrac{\bar{g}_j(t)}{z_j(t)}, & z_j(t) \neq 0 \\ 0, & z_j(t) = 0\end{cases} \qquad (17)$$

which implies $0 \le m_j(t) \le F_j$ and $0 \le n_j(t) \le G_j$ for all $j = 1, \ldots, n$ and $k = 0, 1, \ldots$. From the above, note that

$$(a_{ii}(s))^{-}F_i \le a_{ii}(s)m_i(t) \le (a_{ii}(s))^{+}F_i, \qquad (b_{ii}(s))^{-}G_i \le b_{ii}(s)n_i(t) \le (b_{ii}(s))^{+}G_i.$$

Then, it follows

$$\begin{aligned}
\sum_{i=1}^{n}\varsigma_i|w_i(t)| &\le \sum_{j=1}^{n}\Big\{\Big|1-\int_{t_k}^{t}[c_j(s)-a_{jj}^{+}(s)F_j]\,ds+\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k}^{t}[|a_{ij}(s)|F_j]\,ds\Big|\Big\}\varsigma_j|w_j(t_k)| \\
&\quad +\sum_{j=1}^{n}\Big\{\Big|\int_{t_k}^{t}[b_{jj}^{+}(s)G_j]\,ds+\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k}^{t}[|b_{ij}(s)|G_j]\,ds\Big|\Big\}\varsigma_j|z_j(t_k)|. \qquad (18)
\end{aligned}$$

For the static equation,

$$\begin{aligned}
d_i(t)z_i(t_k)&=p_{ii}(t)\bar h_i(t_k)+q_{ii}(t)\bar k_i(t_k)+\sum_{j\neq i}\big(p_{ij}(t)\bar h_j(t_k)+q_{ij}(t)\bar k_j(t_k)\big),\\
d_j(t)z_j(t_k)&=p_{jj}(t)r_j(t_k)w_j(t_k)+q_{jj}(t)s_j(t_k)z_j(t_k)+\sum_{i\neq j}\big(p_{ij}(t)r_j(t_k)w_j(t_k)+q_{ij}(t)s_j(t_k)z_j(t_k)\big), \qquad (19)
\end{aligned}$$

with

$$r_j(t) = \begin{cases}\dfrac{\bar{h}_j(t)}{w_j(t)}, & w_j(t)\neq 0\\ 0, & w_j(t)=0\end{cases} \qquad (20)$$
$$s_j(t) = \begin{cases}\dfrac{\bar{k}_j(t)}{z_j(t)}, & z_j(t)\neq 0\\ 0, & z_j(t)=0.\end{cases} \qquad (21)$$

From above, we have

$$\Big[d_j(t)-q_{jj}(t)s_j(t_k)-\sum_{i\neq j}q_{ij}(t)s_j(t_k)\Big]z_j(t_k)=\Big[p_{jj}(t)r_j(t_k)+\sum_{i\neq j}p_{ij}(t)r_j(t_k)\Big]w_j(t_k), \qquad (22)$$

we can obtain

$$z_j(t_k)=\frac{p_{jj}(t)r_j(t_k)+\sum_{i\neq j}p_{ij}(t)r_j(t_k)}{d_j(t)-q_{jj}(t)s_j(t_k)-\sum_{i\neq j}q_{ij}(t)s_j(t_k)}\,w_j(t_k). \qquad (23)$$

Note that

$$(p_{ii}(s))^{-}H_i \le p_{ii}(s)r_i(t_k) \le (p_{ii}(s))^{+}H_i, \qquad (q_{ii}(s))^{-}K_i \le q_{ii}(s)s_i(t_k) \le (q_{ii}(s))^{+}K_i,$$

thus we can get

$$z_j(t_k) \le \sigma_j(t)w_j(t_k), \qquad (24)$$

where

$$\sigma_j(t)=\frac{p_{jj}(t)r_j(t_k)+\sum_{i\neq j}p_{ij}(t)r_j(t_k)}{d_j(t)-q_{jj}(t)s_j(t_k)-\sum_{i\neq j}q_{ij}(t)s_j(t_k)}. \qquad (25)$$

Combining (18) and (25), we can observe

$$\begin{aligned}
\sum_{i=1}^{n}\varsigma_i|w_i(t)| &\le \sum_{j=1}^{n}\Big\{\Big|1-\int_{t_k}^{t}[c_j(s)-a_{jj}^{+}(s)F_j]\,ds+\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k}^{t}[|a_{ij}(s)|F_j]\,ds\Big|\Big\}\varsigma_j|w_j(t_k)| \\
&\quad +\sum_{j=1}^{n}\Big\{\Big|\int_{t_k}^{t}[b_{jj}^{+}(s)G_j]\,ds+\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k}^{t}[|b_{ij}(s)|G_j]\,ds\Big|\Big\}\sigma_j(t)\varsigma_j|w_j(t_k)| \\
&\le \sum_{j=1}^{n}\Big|1-\int_{t_k}^{t}[\mu_j(\xi,s)-\delta v_j(\xi,s)]\,ds\Big|\,\varsigma_j|w_j(t_k)|, \qquad (26)
\end{aligned}$$

Since the equality in (9) occurs at $t = t_{k+1}$, we have

$$\sum_{i=1}^{n}\varsigma_i|w_i(t_{k+1})| \le (1-\varepsilon_a)\sum_{i=1}^{n}\varsigma_i|w_i(t_k)|,$$

which implies

$$\lim_{k\to+\infty}\|w(t_k)\|_{1,\xi} = 0.$$

In addition, for each $t \in (t_k, t_{k+1})$, the rule (9) together with the inequality (26) implies that $\|w(t)\|_{1,\xi} \le \|w(t_k)\|_{1,\xi}$. Hence, it holds that

$$\lim_{t\to+\infty}\|w(t)\|_{1,\xi} = 0,$$

then from condition (11), we have

$$\lim_{t\to+\infty}\|z(t)\|_{1,\xi} = 0.$$

The outer-synchronization of the system (4) is thus proved.

Remark 3.1. The sampling interval is positive. Each interval has a common positive lower bound based on (12). This result avoids the Zeno phenomenon during sampling.

Under the decentralized principle, we now consider the system below:

$$\begin{cases} \dfrac{dw_i(t)}{dt} = -c_i(t)w_i(t_k^i) + \sum\limits_{j=1}^{n} a_{ij}(t)\bar{f}_j(t_k^j) + \sum\limits_{j=1}^{n} b_{ij}(t)\bar{g}_j(t_k^j) \\[2mm] 0 = -d_i(t)z_i(t_k^i) + \sum\limits_{j=1}^{n} p_{ij}(t)\bar{h}_j(t_k^j) + \sum\limits_{j=1}^{n} q_{ij}(t)\bar{k}_j(t_k^j), \end{cases} \qquad (27)$$

for all $t \in [t_k^i, t_{k+1}^i)$, $i = 1, \ldots, n$, and $k = 0, 1, 2, \ldots$

Remark 3.2. To further illustrate the mechanism of decentralized data sampling, let $l_k$ denote the time points at which events are updated across the whole network. Then the $l_k$ satisfy

$$\{l_k\}_{k=0}^{+\infty} = \bigcup_{k=0}^{+\infty}\bigcup_{i=1}^{n}\{t_k^i\}.$$

The following theorem gives conditions that guarantee the convergence of system (27) via l1 norm.

Theorem 2. Let $\varepsilon_b \in (0,1)$, $\varepsilon_0 > 0$, and $N\varepsilon_b \le \varepsilon_0$. Starting from a time point $t_k^i$, the state information is not renewed until the following condition holds:

$$t_{k+1}^i = \sup_{\tau \ge t_k^i}\Big\{\tau : \min_{j=1,\ldots,n}\int_{t_k^i}^{t}[\mu_j(\xi,s) - \delta v_j(\xi,s)]\,ds \le \varepsilon_b, \ \forall t \in (t_k^i, \tau]\Big\} \qquad (28)$$

for $i = 1, 2, \ldots, n$ and $k = 0, 1, 2, \ldots$; then this condition guarantees that the system (4) reaches outer-synchronization.

Proof. Similarly to (12),

$$\frac{\varepsilon_b}{N} \le t_{k+1}^i - t_k^i \le \frac{\varepsilon_b}{\varepsilon_0}, \qquad (29)$$
$$\varepsilon_b \le \int_{t_k^i}^{t_{k+1}^i}[\mu_j(\xi,s) - \delta v_j(\xi,s)]\,ds \le N(t_{k+1}^i - t_k^i) \le \frac{N\varepsilon_b}{\varepsilon_0} \le 1. \qquad (30)$$

Consider wi(t) for any neuron i at triggering time tk+1i, where i = 1, …, n, we have

$$\begin{aligned}
\sum_{i=1}^{n}\varsigma_i|w_i(t_{k+1}^i)| &= \sum_{i=1}^{n}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i\Big[w_i(t_k^i)+\int_{t_k^i}^{t_{k+1}^i}\dot w_i(s)\,ds\Big]\\
&= \sum_{i=1}^{n}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i w_i(t_k^i)+\sum_{i=1}^{n}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i\int_{t_k^i}^{t_{k+1}^i}\Big[-c_i w_i(t_k^i)+\sum_{j=1}^{n}a_{ij}(s)\bar f_j(w(t_k^j))+\sum_{j=1}^{n}b_{ij}(s)\bar g_j(z(t_k^j))\Big]ds\\
&\le \sum_{i=1}^{n}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i w_i(t_k^i)-\sum_{i=1}^{n}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i w_i(t_k^i)\int_{t_k^i}^{t_{k+1}^i}c_i\,ds\\
&\quad+\sum_{i=1}^{n}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i\bar f_i(w(t_k^i))\int_{t_k^i}^{t_{k+1}^i}a_{ii}^{+}(s)\,ds+\sum_{i=1}^{n}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i\bar g_i(z(t_k^i))\int_{t_k^i}^{t_{k+1}^i}b_{ii}^{+}(s)\,ds\\
&\quad+\sum_{j\neq i}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i\bar f_j(w(t_k^j))\int_{t_k^i}^{t_{k+1}^i}a_{ij}^{+}(s)\,ds+\sum_{j\neq i}\operatorname{sign}(w_i(t_{k+1}^i))\,\varsigma_i\bar g_j(z(t_k^j))\int_{t_k^i}^{t_{k+1}^i}b_{ij}^{+}(s)\,ds, \qquad (31)
\end{aligned}$$

then, it holds

$$\begin{aligned}
\sum_{i=1}^{n}\varsigma_i|w_i(t_{k+1}^i)| &\le \sum_{j=1}^{n}\Big\{1-\int_{t_k^j}^{t_{k+1}^j}[c_j-F_ja_{jj}^{+}(s)]\,ds+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k^j}^{t_{k+1}^j}|a_{ij}(s)|\,ds\Big\}\varsigma_j|w_j(t_k^j)|\\
&\quad+\sum_{j=1}^{n}\Big\{\int_{t_k^j}^{t_{k+1}^j}G_jb_{jj}^{+}(s)\,ds+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k^i}^{t_{k+1}^i}|b_{ij}(s)|\,ds\Big\}\varsigma_j|z_j(t_k^j)|. \qquad (32)
\end{aligned}$$

From static equation of system (27), we have

$$z_j(t_k^j)=\frac{p_{jj}(t)r_j(t_k^j)+\sum_{i\neq j}p_{ij}(t)r_j(t_k^j)}{d_j-q_{jj}(t)s_j(t_k^j)-\sum_{i\neq j}q_{ij}(t)s_j(t_k^j)}\,w_j(t_k^j)\le\frac{p_{jj}^{+}(t)H_j+\sum_{i\neq j}|p_{ij}(t)|H_j}{d_j-q_{jj}^{+}(t)K_j-\sum_{i\neq j}|q_{ij}(t)|K_j}\,w_j(t_k^j)=\sigma_j(t)w_j(t_k^j). \qquad (33)$$

Then, combining (32) and (33) we can get

$$\begin{aligned}
\sum_{i=1}^{n}\varsigma_i|w_i(t_{k+1}^i)| &\le \Bigg\{\sum_{j=1}^{n}\Big\{1-\int_{t_k^j}^{t_{k+1}^j}[c_j-F_ja_{jj}^{+}(s)]\,ds+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k^i}^{t_{k+1}^i}|a_{ij}(s)|\,ds\Big\}\\
&\qquad+\sum_{j=1}^{n}\Big\{\int_{t_k^j}^{t_{k+1}^j}G_jb_{jj}^{+}(s)\,ds+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k^i}^{t_{k+1}^i}|b_{ij}(s)|\,ds\Big\}\sigma_j(t)\Bigg\}\varsigma_j|w_j(t_k^j)|\\
&\le (1-\varepsilon_b)\sum_{j=1}^{n}\varsigma_j|w_j(t_k^j)|. \qquad (34)
\end{aligned}$$

Based on the triggering rule (28), we can obtain

$$\sum_{i=1}^{n}\varsigma_i|w_i(t_{k+1}^i)| \le (1-\varepsilon_b)\sum_{j=1}^{n}\varsigma_j|w_j(t_k^j)|, \qquad (35)$$

which means

$$\lim_{t_k^i\to+\infty}\|w(t_k^i)\|_1 = 0.$$

For any time $t \in (t_k^i, t_{k+1}^i]$, the state $w_i(t)$ satisfies

$$\begin{aligned}
\sum_{i=1}^{n}\varsigma_i|w_i(t)|&\le\sum_{j=1}^{n}\Big\{1-\int_{t_k^j}^{t}[c_j-a_{jj}^{+}(s)F_j]\,ds+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k^i}^{t}|a_{ij}(s)|\,ds+\sigma_j(t)G_j\int_{t_k^j}^{t}b_{jj}^{+}(s)\,ds+\sigma_j(t)G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\int_{t_k^j}^{t}b_{jj}^{+}(s)\,ds\Big\}\varsigma_j|w_j(t_k)|\\
&\le\sum_{j=1}^{n}\Big\{1-\int_{t_k^i}^{t}[c_j-a_{jj}^{+}(s)F_j]\,ds+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\Big[\int_{t_k^i}^{t}|a_{ij}(s)|\,ds+\int_{t}^{\tau}|a_{ij}(s)|\,ds\Big]+\sigma_j(t)G_j\int_{t_k^j}^{t}b_{jj}^{+}(s)\,ds\\
&\qquad+\sigma_j(t)G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}\Big[\int_{t_k^j}^{t}b_{jj}^{+}(s)\,ds+\int_{t}^{\tau}|b_{ij}(s)|\,ds\Big]\Big\}\varsigma_j|w_j(t_k)|\\
&\le(1-\varepsilon_b)\|w(t_k^i)\|_1, \qquad (36)
\end{aligned}$$

where $t_{k+1}^i \ge \tau > t > t_k^i$. Thus,

$$\|w(t_{k+1}^i)\|_1 \le \|w(t)\|_1 \le (1-\varepsilon_b)\|w(t_k^i)\|_1,$$

for any $t \in (t_k^i, t_{k+1}^i]$ and $i = 1, \ldots, n$, which implies

$$\lim_{t\to+\infty}\|w(t)\|_1 \le \lim_{t_k^i\to+\infty}\|w(t_k^i)\|_1 = 0.$$

The proof of the outer-synchronization of the system (4) is completed.

Remark 3.3. Theorems 1 and 2 are based on centralized and decentralized data sampling that depends on the system structure, and the derivations rely heavily on the structural characteristics of the model.
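
For completeness, a per-neuron counterpart of rule (28) can be sketched in the same spirit: every neuron keeps its own accumulator and samples once its integral reaches $\varepsilon_b$ (an illustrative sketch with assumed integrands, not the authors' implementation):

```python
def decentralized_sampling_times(phi, eps_b, n, t0=0.0, horizon=20.0, dt=1e-3):
    """Per-neuron triggering after rule (28): neuron i resets its own integral."""
    times = {i: [t0] for i in range(n)}   # t_k^i for each neuron i
    acc = {i: 0.0 for i in range(n)}
    t = t0
    while t < horizon:
        for i in range(n):
            acc[i] += phi(i, t) * dt      # integral since this neuron's last sample
            if acc[i] >= eps_b:
                times[i].append(t)
                acc[i] = 0.0
        t += dt
    return times

# Two neurons with slightly different (assumed) integrands.
print(decentralized_sampling_times(lambda i, s: 2.0 + 0.2 * i, eps_b=0.2, n=2, horizon=1.0))
```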

3.2. State-dependent centralized and decentralized data sampling

In this section, we establish a sampling control mechanism according to the state characteristics of the system. Under this sampling mechanism, neurons transmit and update information at the next triggering time point.

In system (6), the state measurement error is defined as

ei(t)=wi(tk)-wi(t),

and the state measurement of static equation is as follows:

ηi(t)=zi(tk)-zi(t),

where t ∈ [tk, tk+1), i = 1, …, n and k = 0, 1, 2, …

Theorem 3. Let $\hbar(t)$ be a positive, decreasing, continuous function on $[0, +\infty)$ with $\hbar(0) > 0$. Set $t_{k+1}$ as the triggering time point such that

$$t_{k+1} = \max_{\imath \ge t_k}\{\imath : \|e(t)\| \le \hbar(t), \ \forall t \in (t_k, \imath)\}, \qquad (37)$$

for $i = 1, \ldots, n$ and $k = 0, 1, 2, \ldots$. If there exist $\varsigma_i$ such that $\mu_j(\xi,t) \ge \varepsilon_3$ and

$$\lim_{t\to+\infty}\hbar(t) = 0,$$

then the system (4) reaches outer-synchronization.

Proof. Consider wi(t) for any neuron i(i = 1, …, n) and ςi > 0(i = 1, …, n)

$$\begin{aligned}
\frac{d\|w(t)\|_1}{dt}&=\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\frac{dw_i(t)}{dt}
=\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\Big[-c_iw_i(t_k)+\sum_{j=1}^{n}a_{ij}(t)\bar f_j(w_j(t_k))+\sum_{j=1}^{n}b_{ij}(t)\bar g_j(z_j(t_k))\Big]\\
&=\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\Big[-c_iw_i(t)+\sum_{j=1}^{n}a_{ij}(t)\bar f_j(w_j(t))+\sum_{j=1}^{n}b_{ij}(t)\bar g_j(z_j(t))\Big]
-\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\,c_i\big[w_i(t_k)-w_i(t)\big]\\
&\quad+\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\sum_{j=1}^{n}a_{ij}(t)\big[\bar f_j(w_j(t_k))-\bar f_j(w_j(t))\big]
+\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\sum_{j=1}^{n}b_{ij}(t)\big[\bar g_j(z_j(t_k))-\bar g_j(z_j(t))\big], \qquad (38)
\end{aligned}$$

from (38), it holds

$$\begin{aligned}
\frac{d\|w(t)\|_1}{dt}&=\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\frac{dw_i(t)}{dt}\\
&\le-\sum_{i=1}^{n}\varsigma_ic_i|w_i(t)|+\sum_{i=1}^{n}\varsigma_ic_i|e_i(t)|+\sum_{i=1}^{n}\varsigma_ia_{ii}^{+}(t)F_i|w_i(t)|+\sum_{i=1}^{n}\varsigma_ib_{ii}^{+}(t)G_i|z_i(t)|+\sum_{i=1}^{n}\varsigma_ia_{ii}^{+}(t)F_i|e_i(t)|+\sum_{i=1}^{n}\varsigma_ib_{ii}^{+}(t)G_i|\eta_i(t)|\\
&\quad+\sum_{j=1}^{n}\sum_{i\neq j}\varsigma_i|a_{ij}|F_j|w_j(t)|+\sum_{j=1}^{n}\sum_{i\neq j}\varsigma_i|a_{ij}|F_j|e_j(t)|+\sum_{j=1}^{n}\sum_{i\neq j}\varsigma_i|b_{ij}|G_j|z_j(t)|+\sum_{j=1}^{n}\sum_{i\neq j}\varsigma_i|b_{ij}|G_j|\eta_j(t)|\\
&=-\sum_{j=1}^{n}\Big[c_j-F_ja_{jj}^{+}(t)-F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big]\varsigma_j|w_j(t)|+\sum_{j=1}^{n}\Big[c_j+F_ja_{jj}^{+}(t)+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big]\varsigma_j|e_j(t)|\\
&\quad+\sum_{j=1}^{n}\Big[G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big]\varsigma_j|z_j(t)|+\sum_{j=1}^{n}\Big[G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big]\varsigma_j|\eta_j(t)|. \qquad (39)
\end{aligned}$$

For static equation, we have

$$d_i\big(z_i(t_k)-z_i(t)\big)=\sum_{j=1}^{n}p_{ij}(t)\big[\bar h_j(w_j(t_k))-\bar h_j(w_j(t))\big]+\sum_{j=1}^{n}q_{ij}(t)\big[\bar k_j(z_j(t_k))-\bar k_j(z_j(t))\big], \qquad (40)$$

so, we can get

$$d_i\eta_i(t) \le \sum_{j=1}^{n}p_{ij}(t)H_je_j(t)+\sum_{j=1}^{n}q_{ij}(t)K_j\eta_j(t), \qquad (41)$$

we can also get

$$d_i(t)|\eta_i(t)| \le \Big[p_{jj}^{+}(t)H_j+\sum_{i\neq j}|p_{ij}(t)|H_j\Big]|e_j(t)|+\Big[q_{jj}^{+}(t)K_j+\sum_{i\neq j}|q_{ij}(t)|K_j\Big]|\eta_j(t)|, \qquad (42)$$

so,

$$|\eta_j(t)| \le \frac{p_{jj}^{+}(t)H_j+\sum_{i\neq j}|p_{ij}(t)|H_j}{d_j(t)-q_{jj}^{+}(t)K_j-\sum_{i\neq j}|q_{ij}(t)|K_j}\,|e_j(t)|=\sigma_j(t)|e_j(t)|. \qquad (43)$$

From (39) and (43), we can get

$$\begin{aligned}
\frac{d\|w(t)\|_1}{dt}&\le-\sum_{j=1}^{n}\Big[c_j(t)-F_ja_{jj}^{+}(t)-F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big]\varsigma_j|w_j(t)|+\sum_{j=1}^{n}\Big[c_j(t)+F_ja_{jj}^{+}(t)+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big]\varsigma_j|e_j(t)|\\
&\quad+\sum_{j=1}^{n}\Big[G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big]\sigma_j(t)\varsigma_j|w_j(t)|+\sum_{j=1}^{n}\Big[G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big]\sigma_j(t)\varsigma_j|e_j(t)|, \qquad (44)
\end{aligned}$$

which implies

$$\begin{aligned}
\frac{d\|w(t)\|_1}{dt}&\le-\mu_j(t)\sum_{j=1}^{n}\varsigma_j|w_j(t)|+M_1\sum_{j=1}^{n}\varsigma_j|e_j(t)|+\delta v_j(t)\varsigma_j|w_j(t)|+\delta v(t)\varsigma_j|e_j(t)|\\
&\le\big[-\mu_j(t)+\delta v(t)\big]\|w(t)\|_1+\big[M_1+\delta v(t)\big]\hbar(t)\\
&\le-\varepsilon_3\|w(t)\|+(M_1+\delta M_2)\hbar(t), \qquad (45)
\end{aligned}$$

for M1, M2 > 0, then we have

$$\begin{aligned}
\|w(t)\|&\le\|w(t_0)\|e^{-\varepsilon_3(t-t_0)}+(M_1+\delta M_2)\int_{t_0}^{t}e^{-\varepsilon_3(t-s)}\|\hbar(s)\|\,ds\\
&=e^{-\varepsilon_3(t-t_0)}\Big[\|w(t_0)\|+(M_1+\delta M_2)\int_{t_0}^{t}e^{\varepsilon_3(s-t_0)}\|\hbar(s)\|\,ds\Big], \qquad (46)
\end{aligned}$$

for s ∈ [t0, t], t ∈ [tk, tk+1).

Based on L'Hospital's rule, we have

$$\lim_{t\to+\infty}\|w(t)\|=\lim_{t\to+\infty}\frac{M_1+\delta M_2}{e^{\varepsilon_3(t-t_0)}}\int_{t_0}^{t}e^{\varepsilon_3(s-t_0)}\|\hbar(s)\|\,ds=\lim_{t\to+\infty}\frac{M_1+\delta M_2}{\varepsilon_3}\|\hbar(t)\|=0. \qquad (47)$$

This means that the system (4) achieves outer-synchronization. The proof is completed.
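
A minimal event-triggered loop following rule (37), with $\hbar(t) = 1/(t+1)$ as in Section 4, can be sketched as follows; the error dynamics are abstracted behind a user-supplied step function, and all numerical settings are assumptions:

```python
import numpy as np

def state_dependent_events(step, w0, hbar, t_end=10.0, dt=1e-3):
    """Centralized state-dependent triggering in the spirit of Theorem 3.

    step(w, w_sampled, dt) -> new w : one integration step of the error system (8)
                                      driven by the last sampled state w_sampled.
    hbar(t)                -> float: positive decreasing threshold with hbar(t) -> 0.
    """
    w = np.asarray(w0, dtype=float)
    w_sampled = w.copy()
    t, events = 0.0, [0.0]
    while t < t_end:
        w = step(w, w_sampled, dt)
        t += dt
        e = np.linalg.norm(w_sampled - w, 1)   # measurement error e(t) = w(t_k) - w(t)
        if e > hbar(t):                        # rule (37) violated -> trigger, resample
            w_sampled = w.copy()
            events.append(t)
    return events

hbar = lambda t: 1.0 / (t + 1.0)               # threshold used in the example section
```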

In system (27), the state measurement error is defined as

$$e_i(t) = w_i(t_k^i) - w_i(t),$$

and the state measurement of static equation is as follows:

$$\eta_i(t) = z_i(t_k^i) - z_i(t),$$

where $t \in [t_k^i, t_{k+1}^i)$, $i = 1, \ldots, n$, and $k = 0, 1, 2, \ldots$. The push-based decentralized updating rule is given as follows.

Theorem 4. Let $\jmath(t)$ be a positive, decreasing, continuous function on $[0, +\infty)$ with $\jmath(0) > 0$. Set $t_{k+1}^i$ as the triggering time point such that

$$t_{k+1}^i = \sup_{\imath \ge t_k^i}\{\imath : |e_i(t)| \le \jmath_i(t), \ \forall t \in (t_k^i, \imath)\}, \qquad (48)$$

for $i = 1, \ldots, n$ and $k = 0, 1, 2, \ldots$. If there exist $\varsigma_i$ such that $\mu_j(\xi,t) \ge \varepsilon_4$ and

$$\lim_{t\to+\infty}\jmath_i(t) = 0,$$

then the system (4) reaches outer-synchronization.

Proof. Consider wi(t) for any neuron i(i = 1, …, n) and ςi > 0(i = 1, …, n)

$$\begin{aligned}
\frac{d\|w(t)\|_1}{dt}&=\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\frac{dw_i(t)}{dt}
=\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\Big[-c_i(t)w_i(t_k^i)+\sum_{j=1}^{n}a_{ij}(t)\bar f_j(w_j(t_k^j))+\sum_{j=1}^{n}b_{ij}(t)\bar g_j(z_j(t_k^j))\Big]\\
&=\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\Big[-c_i(t)w_i(t)+\sum_{j=1}^{n}a_{ij}(t)\bar f_j(w_j(t))+\sum_{j=1}^{n}b_{ij}(t)\bar g_j(z_j(t))\Big]
-\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\,c_i\big[w_i(t_k^i)-w_i(t)\big]\\
&\quad+\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\sum_{j=1}^{n}a_{ij}(t)\big[\bar f_j(w_j(t_k^j))-\bar f_j(w_j(t))\big]
+\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\sum_{j=1}^{n}b_{ij}(t)\big[\bar g_j(z_j(t_k^j))-\bar g_j(z_j(t))\big], \qquad (49)
\end{aligned}$$

from (49), it holds

$$\begin{aligned}
\frac{d\|w(t)\|_1}{dt}&=\sum_{i=1}^{n}\varsigma_i\operatorname{sign}(w_i(t))\frac{dw_i(t)}{dt}\\
&\le-\sum_{i=1}^{n}\varsigma_ic_i|w_i(t)|+\sum_{i=1}^{n}\varsigma_ic_i|e_i(t)|+\sum_{i=1}^{n}\varsigma_ia_{ii}^{+}(t)F_i|w_i(t)|+\sum_{i=1}^{n}\varsigma_ib_{ii}^{+}(t)G_i|z_i(t)|+\sum_{i=1}^{n}\varsigma_ia_{ii}^{+}(t)F_i|e_i(t)|+\sum_{i=1}^{n}\varsigma_ib_{ii}^{+}(t)G_i|\eta_i(t)|\\
&\quad+\sum_{j=1}^{n}\sum_{i\neq j}\varsigma_i|a_{ij}|F_j|w_j(t)|+\sum_{j=1}^{n}\sum_{i\neq j}\varsigma_i|a_{ij}|F_j|e_j(t)|+\sum_{j=1}^{n}\sum_{i\neq j}\varsigma_i|b_{ij}|G_j|z_j(t)|+\sum_{j=1}^{n}\sum_{i\neq j}\varsigma_i|b_{ij}|G_j|\eta_j(t)|\\
&\le-\sum_{j=1}^{n}\Big[c_j-F_ja_{jj}^{+}(t)-F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big]\varsigma_j|w_j(t)|+\sum_{j=1}^{n}\Big[c_j+F_ja_{jj}^{+}(t)+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big]\varsigma_j|e_j(t)|\\
&\quad+\sum_{j=1}^{n}\Big[G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big]\varsigma_j|z_j(t)|+\sum_{j=1}^{n}\Big[G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big]\varsigma_j|\eta_j(t)|. \qquad (50)
\end{aligned}$$

For static equation, we have

$$d_i\big(z_i(t_k^i)-z_i(t)\big)=\sum_{j=1}^{n}p_{ij}(t)\big[\bar h_j(w_j(t_k^j))-\bar h_j(w_j(t))\big]+\sum_{j=1}^{n}q_{ij}(t)\big[\bar k_j(z_j(t_k^j))-\bar k_j(z_j(t))\big], \qquad (51)$$

so, we can get

$$d_i\eta_i(t) \le \sum_{j=1}^{n}p_{ij}(t)H_je_j(t)+\sum_{j=1}^{n}q_{ij}(t)K_j\eta_j(t), \qquad (52)$$

we also can get

$$d_i|\eta_i(t)| \le \Big[p_{jj}^{+}H_j+\sum_{i\neq j}|p_{ij}(t)|H_j\Big]|e_j(t)|+\Big[q_{jj}^{+}K_j+\sum_{i\neq j}|q_{ij}(t)|K_j\Big]|\eta_j(t)|, \qquad (53)$$

so,

$$|\eta_j(t)| \le \frac{p_{jj}^{+}(t)H_j+\sum_{i\neq j}|p_{ij}(t)|H_j}{d_j-q_{jj}^{+}(t)K_j-\sum_{i\neq j}|q_{ij}(t)|K_j}\,|e_j(t)|=\sigma_j(t)|e_j(t)|. \qquad (54)$$

From (50) and (54), we can get

$$\begin{aligned}
\frac{d\|w(t)\|_1}{dt}&\le-\sum_{j=1}^{n}\Big[c_j-F_ja_{jj}^{+}-F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big]\varsigma_j|w_j(t)|+\sum_{j=1}^{n}\Big[c_j+F_ja_{jj}^{+}+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big]\varsigma_j|e_j(t)|\\
&\quad+\sum_{j=1}^{n}\Big[G_jb_{jj}^{+}+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big]\sigma_j(t)\varsigma_j|w_j(t)|+\sum_{j=1}^{n}\Big[G_jb_{jj}^{+}+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big]\sigma_j(t)\varsigma_j|e_j(t)|, \qquad (55)
\end{aligned}$$

which implies

$$\begin{aligned}
\frac{d\|w(t)\|_1}{dt}&\le-\mu_j(t)\sum_{j=1}^{n}\varsigma_j|w_j(t)|+M_1\sum_{j=1}^{n}\varsigma_j|e_j(t)|+\delta v(t)\varsigma_j|w_j(t)|+\delta v(t)\varsigma_j|e_j(t)|\\
&\le\big[-\mu_j(t)+\delta v(t)\big]\|w(t)\|_1+\big[M_1+\delta M_2\big]\sum_{j=1}^{n}\varsigma_j\jmath_j(t)\\
&\le-\varepsilon_4\|w(t)\|+\big[M_1+\delta M_2\big]\|\jmath(t)\|, \qquad (56)
\end{aligned}$$

then, we have

$$\begin{aligned}
\|w(t)\|&\le\|w(t_0^i)\|e^{-\varepsilon_4(t-t_0^i)}+(M_1+\delta M_2)\int_{t_0^i}^{t}e^{-\varepsilon_4(t-s)}\|\jmath(s)\|\,ds\\
&=e^{-\varepsilon_4(t-t_0^i)}\Big[\|w(t_0^i)\|+(M_1+\delta M_2)\int_{t_0^i}^{t}e^{\varepsilon_4(s-t_0^i)}\|\jmath(s)\|\,ds\Big], \qquad (57)
\end{aligned}$$

for $s \in [t_0^i, t]$, $t \in [t_k^i, t_{k+1}^i)$.

Based on L'Hospital's rule, we have

$$\lim_{t\to+\infty}\|w(t)\|=\lim_{t\to+\infty}\frac{M_1+\delta M_2}{e^{\varepsilon_4(t-t_0^i)}}\int_{t_0^i}^{t}e^{\varepsilon_4(s-t_0^i)}\|\jmath(s)\|\,ds=\lim_{t\to+\infty}\frac{M_1+\delta M_2}{\varepsilon_4}\|\jmath(t)\|=0. \qquad (58)$$

The proof is completed.

Remark 3.4 Theorems 3 and 4 make use of centralized and decentralized data-sampling principles that rely on the state variables. The measurement errors are used to characterize the synchronization of the system.

Remark 3.5 Sampled-data control is a topic that cannot be ignored in the field of control theory. The significance of the control method lies in the size of the sampling interval and the trigger condition. As soon as the trigger condition is met, information transmission can start.

Remark 3.6 In this paper, we study the outer-synchronization of ARTNNs described by a DAS of index 1. The method is to build an acceptable sampling mechanism so that the system achieves outer-synchronization, where the sampling interval is bounded. In light of this work, the following research directions are suggested: (1) higher-index DASs can be used as future research directions; (2) intermittent sampling and random sampling can be used as sampling mechanisms; (3) fractional-order systems can be considered.

Remark 3.7 The research purpose of this paper is to create a workable sampling technique that enables the system to achieve outer-synchronization. The approach does have certain drawbacks. (1) It is only applicable to DASs of index 1; high-index systems cannot use this strategy, since high-index DASs cannot be reduced to differential equations directly. (2) Many of the inequalities in the study require upper and lower bounds, and the activation functions of the system must satisfy constraints that are equivalent to, or even more stringent than, the Lipschitz condition. (3) Integral inequalities are used to draw conclusions via data-sampling techniques that rely heavily on the system's structure. As a result, the model structure imposes a major restriction on the approach used in this research.

4. A numerical example

In this section, a numerical simulation demonstrates the effectiveness of the conclusions.

4.1. Example description

Example.

$$\begin{cases}
\dfrac{dx_1(t)}{dt} = -c_1(t)x_1(t) + a_{11}(t)f_1(x_1(t)) + a_{12}(t)f_2(x_2(t)) + b_{11}(t)g_1(y_1(t)) + b_{12}(t)g_2(y_2(t)) + J_1 \\[1mm]
0 = -d_1(t)y_1(t) + p_{11}(t)h_1(x_1(t)) + p_{12}(t)h_2(x_2(t)) + q_{11}(t)k_1(y_1(t)) + q_{12}(t)k_2(y_2(t)) + I_1 \\[1mm]
\dfrac{dx_2(t)}{dt} = -c_2(t)x_2(t) + a_{21}(t)f_1(x_1(t)) + a_{22}(t)f_2(x_2(t)) + b_{21}(t)g_1(y_1(t)) + b_{22}(t)g_2(y_2(t)) + J_2 \\[1mm]
0 = -d_2(t)y_2(t) + p_{21}(t)h_1(x_1(t)) + p_{22}(t)h_2(x_2(t)) + q_{21}(t)k_1(y_1(t)) + q_{22}(t)k_2(y_2(t)) + I_2
\end{cases} \qquad (59)$$
$$A = \begin{pmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{pmatrix} = \begin{pmatrix} -1.2 & 0 \\ 0 & 1.2 \end{pmatrix}, \quad B = \begin{pmatrix} b_{11}(t) & b_{12}(t) \\ b_{21}(t) & b_{22}(t) \end{pmatrix} = \begin{pmatrix} 0.3 & -0.6 \\ -1.2 & -0.2 \end{pmatrix},$$
$$P = \begin{pmatrix} p_{11}(t) & p_{12}(t) \\ p_{21}(t) & p_{22}(t) \end{pmatrix} = \begin{pmatrix} 1.4 & 0 \\ 0 & 1.4 \end{pmatrix}, \quad Q = \begin{pmatrix} q_{11}(t) & q_{12}(t) \\ q_{21}(t) & q_{22}(t) \end{pmatrix} = \begin{pmatrix} 0.3 & -0.6 \\ -1.2 & -0.2 \end{pmatrix},$$
$$C = \begin{pmatrix} c_1(t) \\ c_2(t) \end{pmatrix} = \begin{pmatrix} 1.6 \\ 1.6 \end{pmatrix}, \quad D = \begin{pmatrix} d_1(t) \\ d_2(t) \end{pmatrix} = \begin{pmatrix} -0.5 \\ -0.5 \end{pmatrix}, \quad J = \begin{pmatrix} J_1 \\ J_2 \end{pmatrix} = \begin{pmatrix} 0.2 \\ 0.2 \end{pmatrix}, \quad I = \begin{pmatrix} I_1 \\ I_2 \end{pmatrix} = \begin{pmatrix} 0.1 \\ 0.1 \end{pmatrix},$$
$$f(x) = h(x) = \frac{1}{1+e^{-x}}, \qquad g(y) = k(y) = \frac{1}{1+e^{-y}},$$

Then, let $F_j = G_j = H_j = K_j = \frac{1}{2}$ and $\xi_1 = \xi_2 = 1$. We can calculate that

$$\begin{aligned}
&\max_{j}\sup_{t}\{\mu_j(\xi,t)\}=\max_{j}\sup_{t}\Big\{c_j(t)-F_ja_{jj}^{+}(t)-F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}|\Big\}=1.81,\\
&\min_{j}\{\sigma_j(t)\}=\min_{j}\Bigg\{\frac{p_{jj}^{+}(t)H_j+H_j\sum_{i\neq j}|p_{ij}(t)|}{d_j(t)-q_{jj}^{+}(t)K_j-K_j\sum_{i\neq j}|q_{ij}(t)|}\Bigg\}=-0.7,\\
&\max_{j}\{v_j(\xi,t)\}=\max_{j}\Big\{G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big\}=0.75,\\
&M_1=\max_{1\le j\le n}\sup_{t\ge t_0}\Big\{c_j(t)+F_ja_{jj}^{+}(t)+F_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|a_{ij}(t)|\Big\}=1.6,\\
&M_2=\max_{1\le j\le n}\sup_{t\ge t_0}\Big\{G_jb_{jj}^{+}(t)+G_j\sum_{i\neq j}\tfrac{\varsigma_i}{\varsigma_j}|b_{ij}(t)|\Big\}=0.75,
\end{aligned}$$

then,

$$\sup\{\mu_j(\xi,t)-\delta v_j(\xi,t)\} = 1.81 - (-0.5)\times 0.75 = 2.185,$$

Fix $N = 2.2$ and $\varepsilon_0 = 0.5$. Then, with $\varepsilon_a = 0.7$ and $\varepsilon_b = 0.2$, the following inequalities hold by calculation:

$$N\varepsilon_a \le \varepsilon_0(2-\varepsilon_a), \qquad N\varepsilon_b \le \varepsilon_0.$$

Fix $\jmath(t) = \hbar(t) = \frac{1}{t+1}$, which satisfies

$$\lim_{t\to+\infty}\jmath(t) = 0 \quad \text{and} \quad \lim_{t\to+\infty}\hbar(t) = 0.$$
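
The example can be reproduced with a standard ODE solver once the algebraic variables are eliminated at each step by solving the static equation; the sketch below follows the sign convention of model (4) and is a reconstruction under that assumption, not the authors' original script:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

A = np.array([[-1.2, 0.0], [0.0, 1.2]]);  B = np.array([[0.3, -0.6], [-1.2, -0.2]])
P = np.array([[1.4, 0.0], [0.0, 1.4]]);   Q = np.array([[0.3, -0.6], [-1.2, -0.2]])
C = np.array([1.6, 1.6]);  D = np.array([-0.5, -0.5])
J = np.array([0.2, 0.2]);  I = np.array([0.1, 0.1])
act = lambda u: 1.0 / (1.0 + np.exp(-u))          # f = g = h = k

def algebraic_y(x, y_guess=np.zeros(2)):
    """Solve 0 = -D*y + P h(x) + Q k(y) + I for the algebraic variables y."""
    res = lambda y: -D * y + P @ act(x) + Q @ act(y) + I
    return fsolve(res, y_guess)

def rhs(t, x):
    """Differential part dx/dt = -C*x + A f(x) + B g(y) + J with y from the AEs."""
    y = algebraic_y(x)
    return -C * x + A @ act(x) + B @ act(y) + J

sols = [solve_ivp(rhs, (0.0, 20.0), [x1, x2], max_step=0.01)
        for x1 in (0.1, 0.3, 0.5) for x2 in (0.1, 0.3, 0.5)]
# Outer-synchronization check: trajectories from different initial values converge.
print(np.max(np.abs(sols[0].y[:, -1] - sols[-1].y[:, -1])))
```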

4.2. Simulation results

In Figure 1, the state variables (x1(t), x2(t)) take nine sets of initial values in turn: (0.1, 0.1), (0.1, 0.3), (0.1, 0.5), (0.3, 0.1), (0.3, 0.3), (0.3, 0.5), (0.5, 0.1), (0.5, 0.3), and (0.5, 0.5). Based on the AEs of the model (59), the values of the algebraic variables (y1(t), y2(t)) are then determined. It can be seen from the figure that the state curves starting from any of these initial values reach outer-synchronization.


Figure 1. States x1(t) and x2(t) of model (59) with compatible initial values, where x1(0), x2(0) ∈ {0.1, 0.3, 0.5}.

From a geometrical point of view, this means that the state function from any initial value will converge to the stable equilibrium point. When we consider a larger range of initial values, the same evolutionary trend can still be seen. Figure 2 shows that the phase curves (x1(t), x2(t)) from different initial values converge to the stable equilibrium point (−0.248, 0.390).

Figure 2. Phase diagram of differential states of model (59).

Considering the data-sampling principles, the data are collected at certain time intervals of t. Figures 3, 4 show the evolution of the state functions and the system error when the sampling interval is 1/2 and 1/6, respectively. Comparing Figures 3, 4, it can be seen that a smaller sampling interval yields a smaller error between the sampled system and the original system, while outer-synchronization is ultimately achieved in both cases.

Figure 3. Evolution trend of state variables of the original system and the error system under centralized control style.

Figure 4. Evolution trend of state variables of original system and error system under centralized control style.

Figures 5, 6 show the evolution of the state curves and the error range before and after sampling under the decentralized data-sampling principle. The sampling intervals of time t in Figure 5 are 1/3 and 1/4, and those in Figure 6 are 1/8 and 1/16. Comparing Figures 5, 6, it is also observed that a smaller sampling interval yields a smaller error between the sampled system and the original system, while outer-synchronization is achieved in the end. Figure 7 shows the release time points and release time intervals.

Figure 5. Evolution trend of state variables of original system and error system under decentralized control style.

Figure 6. Evolution trend of state variables of the original system and the error system under centralized control style.

Figure 7. Release time point and release time interval.

It can be observed from the simulation results that, no matter which sampling method is used, as long as the conditions of Theorems 1-4 are satisfied, the system reaches outer-synchronization while saving more cost.

4.3. Simulation steps

The numerical simulation in this section is carried out according to the following steps:

Step 1 Define the original NN described by DAS, where the independent variable of the state function is a continuous-time variable t.

Step 2 Determine the initial values of state variables and their derivatives and check the initial value compatibility.

Step 3 The ARTNN model represented by the DAS is regarded as an implicit DE system, and the solutions of the original system are solved by using the implicit DE.

Step 4 Define the sampling function and replace the variables t in the original system.

Step 5 By derivation, the AEs in the original system (1) are transformed into equivalent DEs, and the original DAS (1) is transformed into the equivalent differential system (2).

Step 6 Solve the equivalent differential system (2) using a method for neutral-type time-delay DEs.

Step 7 Compare the solutions of the original DAS (1) and the equivalent differential system (2).

Remark 4.1 The initial values of the DAS (1) and the equivalent differential system (2) are the same, so the initial value used in Step 6 is the same as the initial value of the original system in Step 2.
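
Step 2 (initial-value compatibility) amounts to choosing y(0) so that the AEs hold at t = 0 for the given x(0). A small sketch, reusing `algebraic_y`, `act`, and the matrices from the listing in Section 4.1:

```python
import numpy as np

def consistent_initialization(x0):
    """Return (x0, y0) with y0 satisfying the algebraic equations at t = 0."""
    x0 = np.asarray(x0, dtype=float)
    y0 = algebraic_y(x0)                       # from the Section 4.1 sketch
    # Residual of the static equation should be (numerically) zero.
    residual = -D * y0 + P @ act(x0) + Q @ act(y0) + I
    assert np.allclose(residual, 0.0, atol=1e-8)
    return x0, y0

x0, y0 = consistent_initialization([0.1, 0.1])
print(y0)
```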

5. Conclusion

In this research, we demonstrated that outer-synchronization of ARTNNs may be achieved through suitable centralized and decentralized data-sampling procedures. These theoretical results enhance and enrich existing research. By establishing suitable sampling techniques, sufficient conditions for the outer-synchronization of the system were obtained. The positive lower bound of the sampling interval ensures that the system does not encounter the Zeno phenomenon during sampling. This paper suggests ideas for future work: (1) outer-synchronization of ARTNNs taking both conservatism and complexity into account; (2) analysis of outer-synchronization of ARTNNs subject to stochastic disturbance; (3) how to increase the sampling interval so that the results obtained by the error system remain consistent with the original system.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

PL: numerical simulation and description. QL: drafting the manuscript. ZL: full text proofreading. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by the Natural Science Foundation of China under Grant 61773152 and Hubei Provincial Department of Education under Grant No. 2018CFB532.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ansari, M. S. (2022). A single-layer asymmetric RNN with low hardware complexity for solving linear equations. Neurocomputing 485, 74–88. doi: 10.1016/j.neucom.2022.01.033

Berger, T., and Reis, T. (2017). Observers and dynamic controllers for linear differential-algebraic systems. SIAM J. Control Optim. 55, 3364–3591. doi: 10.1137/15M1035355

Bill, D. J., and Mareels, I. M. (1990). Stability theory for differential/algebraic systems with application to power systems. IEEE Trans. Circ. Syst. 37, 1416–1423. doi: 10.1109/31.62415

Chang, B., Chen, M. M., Haber, E., and Chi, E. (2019). AntisymmetricRNN: a dynamical system view on recurrent neural networks. arXiv preprint arXiv:1902.09689. doi: 10.48550/arXiv.1902.09689

Chen, J. J., Chen, B. S., and Zeng, Z. G. (2021). Exponential quasi-synchronization of coupled delayed memristive neural networks via intermittent event-triggered control. Neural Netw. 141, 98–106. doi: 10.1016/j.neunet.2021.01.013

Chen, P., and Han, Q. L. (2013). "Output-based event-triggered H∞ control for sampled-data control systems with nonuniform sampling," in 2013 American Control Conference (Washington, DC), 1727–1732.

Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., et al. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. doi: 10.3115/v1/D14-1179

Constantinos, P. C. (1988). The consistent initialization of differential-algebraic systems. SIAM J. Sci. Stat. Comput. 9, 213–231. doi: 10.1137/0909014

Esposito, W. R., and Floudas, C. A. (2000). Global optimization for the parameter estimation of differential-algebraic systems. Ind. Eng. Chem. Res. 39, 1291–1310. doi: 10.1021/ie990486w

Federico, M., and Zárate-Miñano, R. (2013). A systematic method to model power systems as stochastic differential algebraic equations. IEEE Trans. Power Syst. 28, 4537–4544. doi: 10.1109/TPWRS.2013.2266441

Hu, L. Y., and Hu, X. H. (2019). Adaptive neural finite-time stabilisation for a class of p-normal form nonlinear systems with unknown virtual control coefficients. Int. J. Control 94, 1386–1401. doi: 10.1080/00207179.2019.1651454

Kong, F. C., Zhu, Q. X., Sakthivel, R., and Mohammadzadeh, A. (2021). Fixed-time synchronization analysis for discontinuous fuzzy inertial neural networks with parameter uncertainties. Neurocomputing 422, 295–313. doi: 10.1016/j.neucom.2020.09.014

Liu, J. B., Bao, Y., Zheng, W. T., and Hayat, S. (2021). Network coherence analysis on a family of nested weighted n-polygon networks. Fractals 29, 1–15. doi: 10.1142/S0218348X21502601

Liu, J. B., Zhao, J., and Cai, Z. Q. (2020). On the generalized adjacency, Laplacian and signless Laplacian spectra of the weighted edge corona networks. Physica A 540, 123073. doi: 10.1016/j.physa.2019.123073

Liu, J. B., Zhao, J., He, H., and Shaom, Z. (2019). Valency-based topological descriptors and structural property of the generalized Sierpinski Networks. J. Stat. Phys. 177, 1131–1147. doi: 10.1007/s10955-019-02412-2

Liu, P., Zeng, Z. G., and Wang, J. (2018). Global synchronization of coupled fractional-order recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 30, 2358–2368. doi: 10.1109/TNNLS.2018.2884620

Liu, Y. J., Park, J. H., Guo, B. Z., and Shu, Y. J. (2017). Further results on stabilization of chaotic systems based on fuzzy memory sampled-data control. IEEE Trans. Fuzzy Syst. 26, 1040–1045. doi: 10.1109/TFUZZ.2017.2686364

Lu, W. L., Zheng, R., and Chen, Y. P. (2016). Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling. Neural Netw. 75, 22–31. doi: 10.1016/j.neunet.2015.11.006

Lv, L., Wu, Z., Zhang, J., Zhang, L., Tan, Z., and Tian, Z. (2022b). A VMD and LSTM based hybrid model of Load forecasting for power grid security. IEEE Trans. Ind. Inform. 18, 6474–6482. doi: 10.1109/TII.2021.3130237

Lv, L., Wu, Z., Zhang, L., Gupta, B. B., and Tian, Z. (2022a). An edge-AI based forecasting approach for improving smart microgrid efficiency. IEEE Trans. Indu. Inform. 2022, 3163137. doi: 10.1109/TII.2022.3163137

Lv, L. L., Chen, J. B., Zhang, L., and Zhang, F. R. (2022). Gradient-based neural networks for solving periodic Sylvester matrix equations. J. Franklin Inst. doi: 10.1016/j.jfranklin.2022.05.023 (in press).

Mao, J. H., Xu, W., Yang, Y., Wang, J., Huang, Z., Yuille, A., et al. (2014). Deep captioning with multimodal recurrent neural networks (m-RNN). arXiv preprint arXiv:1412.6632. doi: 10.48550/arXiv.1412.6632

Shi, H., Xu, M. H., and Li, R. (2017). Deep learning for household load forecasting—A novel Pooling deep RNN. IEEE Trans. Smart Grid 9, 5271–5280. doi: 10.1109/TSG.2017.2686012

Su, H. S., Wang, Z. J., Song, Z. Y., and Chen, X. (2017). Event-triggered consensus of non-linear multi-agent systems with sampling data and time delay. IET Control Theory Appl. 11, 1715–1725. doi: 10.1049/iet-cta.2016.0865

Syed Ali, M., Usha, M., Orman, Z., and Arik, S. (2019). Improved result on state estimation for complex dynamical networks with time varying delays and stochastic sampling via sampled-data control. Neural Netw. 114, 28–37. doi: 10.1016/j.neunet.2019.02.004

Wu, Y. B., Zhu, J. L., and Li, W. X. (2019). Intermittent discrete observation control for synchronization of stochastic neural networks. IEEE Trans. Cybern. 50, 2414–2424. doi: 10.1109/TCYB.2019.2930579

Yang, S. M., Gao, T., Wang, J., Deng, B., Lansdell, B., and Linares-Barranco, B. (2021a). Efficient spike-driven learning with dendritic event-based processing. Front. Neurosci. 15, 601109. doi: 10.3389/fnins.2021.601109

Yang, S. M., Gao, T. B., Deng, M., Rahimi Azghadi, M., and Linares-Barranco, B. (2021b). Neuromorphic context-dependent learning framework with fault-tolerant spike routing. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–15. doi: 10.1109/TNNLS.2021.3084250

Yang, S. M., Gao, T. J., Wang, B., Deng, M. R., Azghadi, T., Lei, B., and Linares-Barranco, B. (2022c). SAM: a unified self-adaptive multicompartmental spiking neuron model for learning with working memory. Front. Neurosci. 16, 850945. doi: 10.3389/fnins.2022.850945

Yang, S. M., Linares-Barranco, B., and Chen, B. D. (2022b). Heterogeneous ensemble-based spike-driven few-shot online learning. Front. Neurosci. 16, 850932. doi: 10.3389/fnins.2022.850932

Yang, S. M., Tan, J. T., and Chen, B. D. (2022a). Robust spike-based continual meta-learning improved by restricted minimum error entropy criterion. Entropy 24, 455. doi: 10.3390/e24040455

Yin, W. P., Kann, K., Yu, M., and Schütze, H. (2017). Comparative study of CNN and RNN for natural language processing. arXiv preprint arXiv:1702.01923. doi: 10.48550/arXiv.1702.01923

Zhang, L., Huang, Z., Liu, W., Guo, Z., and Zhang, Z. (2021a). Weather radar echo prediction method based on convolution neural network and long short-term memory networks for sustainable e-agriculture. J. Cleaner Product. 298, 126776. doi: 10.1016/j.jclepro.2021.126776

Zhang, L., Huo, Y., Ge, Q., Ma, Y., and Ouyang, W. (2021b). A privacy protection scheme for iot big data based on time and frequency limitation. Wireless Commun. Mobile Comput. 2021, 1–10. doi: 10.1155/2021/5545648

Zhang, L., Xu, C. B., Gao, Y. H., Han, Y., Du, X., Tian, Z., et al. (2020). Improved Dota2 lineup recommendation model based on a bidirectional LSTM. Tsinghua Sci. Technol. 25, 712–720. doi: 10.26599/TST.2019.9010065

Keywords: asymmetric recurrent time-varying neural networks, differential-algebraic system, singular neural networks, data-sampling, outer-synchronization

Citation: Li P, Liu Q and Liu Z (2022) Outer-synchronization criterions for asymmetric recurrent time-varying neural networks described by differential-algebraic system via data-sampling principles. Front. Comput. Neurosci. 16:1029235. doi: 10.3389/fncom.2022.1029235

Received: 27 August 2022; Accepted: 24 October 2022;
Published: 17 November 2022.

Edited by:

Jia-Bao Liu, Anhui Jianzhu University, China

Reviewed by:

Guo Jiang, Hubei Normal University, China
Shuangming Yang, Tianjin University, China

Copyright © 2022 Li, Liu and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Qing Liu, fall198883@163.com
