Hybrid Neural Network Cerebellar Model Articulation Controller Design for Non-linear Dynamic Time-Varying Plants

This study proposes a hybrid method for controlling dynamic time-varying plants that comprises a neural-network controller and a cerebellar model articulation controller (CMAC). The neural-network controller reduces the range and quantity of the input. The CMAC is the main controller and computes the final control output. The parameters of the proposed network structure are adjusted using adaptive laws, which are derived using the steepest-descent gradient approach and a back-propagation algorithm. Lyapunov stability theory is applied to guarantee system convergence. The proposed combination architecture reduces the size of the designed CMAC structure and makes it easy to design the network size and the initial membership functions. Finally, numerical-simulation results demonstrate the effectiveness of the proposed method.


INTRODUCTION
Nowadays, the control of non-linear systems is a topic that continues to attract many researchers because of its widespread applications. In many practical cases, the challenge is that the mathematical model of the system is poorly known or uncertain. Furthermore, non-linear systems are susceptible to internal and external disturbances (Li et al., 2018). Therefore, in recent years, some studies have used neural networks (NNs) to approximate non-linear functions (Zhou and Zhang, 2015; Han, 2018). Some studies combined a neural network with other methods to achieve better control performance, such as proportional-integral-derivative (PID) NNs, fuzzy NNs, and sliding-mode NNs (Zou et al., 2011; Zhou and Zhang, 2015; Lin and Le, 2017a; Zhao et al., 2018; Wang et al., 2019). Neural networks enable large-scale concurrent computing, processing, and adaptive weight adjustment, and they are simple and convenient (Prieto et al., 2016). Recently, many studies have used NNs to address control, system-identification, and prediction problems. In 2013, Li et al. developed an optical-interference pattern-sensing method and neural-network classification for pretesting gap mura on thin-film-transistor liquid crystal displays (Li et al., 2013). In 2017, Sun and Pan developed a reliable neural network to control non-affine non-linear systems (Sun and Pan, 2017). In 2018, Wang et al. presented a memristor-based artificial neural network to predict house prices (Wang et al., 2018). However, neural networks require a considerable amount of computational resources, there is a risk of overfitting, and the architecture must be defined in advance (Tu, 1996).
The concept of a cerebellar-model articulation controller (CMAC) was first proposed by Albus (1975). It is a type of neural network based on a model of the mammalian cerebellum (associative memory). It addresses the problems of fast-growing size and the learning difficulties that are inherent to conventional neural networks. Several studies showed that, for applications that require online learning, CMACs perform better than simple neural networks (Lin and Chen, 2009; Guan et al., 2019). Since CMACs have a non-fully connected, perceptron-like associative-memory network with overlapping receptive fields, they learn quickly and their computation is simple. In contrast, neural networks have a fully connected perceptron; all weights are updated during each learning cycle, so the learning capacity of a neural network is essentially global in nature and slow (Lin and Chen, 2009). The main advantages of CMACs over NNs, MLPs, and RBFNs are fast learning, simple computation, and good generalization capability (Lin et al., 2013). Recent studies have proposed modified CMACs with better performance, such as function-link, self-organizing, and type-2 fuzzy CMACs. In 2016, Lin et al. proposed a type-2 fuzzy CMAC for an adaptive filter (Lin et al., 2016). In 2017, Lin and Le used a wavelet CMAC to control non-linear systems (Lin and Le, 2017b). In 2018, Tsao et al. proposed the use of a deep CMAC for an adaptive noise-cancellation system (Tsao et al., 2018). A conventional CMAC also has some disadvantages: it is difficult to determine a suitable network size and to select the initial membership functions (MFs) that achieve the best performance (Lin and Chen, 2009). This is particularly difficult when the network has many inputs and each input has a large range.
This study proposes a new method with a structure that includes a neural network connected in series with a CMAC. The neural network reduces the quantity and range of all inputs. The outputs of the NN feed into the CMAC, which computes the final outputs. This proposed network structure is referred to as a hybrid neural-network CMAC (HNNCMAC). It is used to control dynamic time-varying plants. The motivation behind cascading the two architectures was to keep the inputs to the CMAC structure small, avoiding the difficulty of selecting a suitable network size and the initial membership functions. In the CMAC structure, the number of neurons in the receptive-field spaces grows exponentially with the number of neurons in the input space. The proposed HNNCMAC controller uses the NN to reduce the inputs to the CMAC, so the structure of the modified CMAC in the proposed network is smaller than that of a conventional CMAC. This is more effective when the number of inputs is large. In comparison with previous modified CMAC neural networks, as in Lin and Le (2017b) and Lin et al. (2018a,b), the proposed HNNCMAC has some advantages, such as a small CMAC structure and ease of designing the network size and the initial membership functions. The main contributions of this study are: (1) the successful design of an adaptive HNNCMAC system for the control of non-linear dynamic time-varying plants; (2) adaptive laws are derived using the steepest-descent gradient approach and a back-propagation algorithm; (3) the input range and quantity in the proposed CMAC can be reduced by the NN pre-controller; (4) the stability of the proposed method is guaranteed by Lyapunov analysis; and (5) the method can be used for non-linear control problems, as shown by the results of numerical simulations.
The remaining sections of the paper are organized as follows. The design of the HNNCMAC is presented in section Methods. Section 3 presents the simulation results for controlling the dynamic time-varying plant. Section 4 provides the discussion. Finally, the conclusion is given in Section 5.

HNNCMAC Structure
The structure of the hybrid NNCMAC includes a neural network that is connected in series with a CMAC. The NN reduces the range and the quantity of the input, and the output of the NN becomes the input for the CMAC, which computes the final control output. Figure 1 shows the structure of the HNNCMAC, which has seven spaces: input, hidden NN, output NN, association, receptive-field, weight-memory, and final-output spaces. These are described below.
(1) Input space I: There is no computation in this space. Input data from the dataset are fed into this space and directly transferred to the next space.
(2) Hidden NN space A: each node in this space computes a weighted sum of the input vector I = [I_1, I_2, ..., I_{n_l}]^T using the hidden NN weight matrix [h_al], and then adds a bias α = [α_1, α_2, ..., α_{n_a}]^T, where n_a is the number of nodes in the hidden NN space and n_l is the number of nodes in the input space; h_al is the connecting weight from the a-th (a = 1, ..., n_a) neuron in space A to the l-th (l = 1, ..., n_l) neuron in space I. The output of the a-th node is derived as

A_a = Σ_{l=1}^{n_l} h_al I_l + α_a (1)

where α_a is the bias of the a-th neuron. The output of this space is expressed as A = [A_1, A_2, ..., A_{n_a}]^T.
(3) Output NN space B: this space produces the output of the neural network, which serves as the input for the CMAC. Each node computes a weighted sum of the previous-layer vector A = [A_1, A_2, ..., A_{n_a}]^T using the output NN weight matrix [v_ia]. To limit the input range for the CMAC, a tangent sigmoid function is applied, so the output of the i-th node is

B_i = tanh(Σ_{a=1}^{n_a} v_ia A_a + b_i) (2)

where v_ia is the connecting weight from the i-th neuron in space B to the a-th neuron in space A and b_i is the bias of the i-th neuron. The output of this space is expressed as B = [B_1, B_2, ..., B_{n_i}]^T.

(4) Association space F: in this space, several elements are accumulated as a block. The membership grades in each block are calculated using the input variables B_i from the previous space and the Gaussian MFs:
μ_ijk(B_i) = exp(−(B_i − m_ijk)² / σ_ijk²), for i = 1, ..., n_i, j = 1, ..., n_j, and k = 1, ..., n_k (3)

where m_ijk is the mean and σ_ijk is the variance of the k-th block in the j-th layer that corresponds to the i-th input variable; n_j is the number of layers; and n_k is the number of blocks. The output of this space is the association vector.
(5) Receptive-field space φ: each multi-dimensional receptive field is formed by multiplying the membership grades of the associated blocks across the input variables, φ_jk = Π_{i=1}^{n_i} μ_ijk(B_i), giving the receptive-field vector φ.

(6) Weight-memory space W: each receptive field is connected to an adjustable memory weight w_jkq that stores the contribution of that field to the q-th output.

(7) Final output space O: this space performs the product operation of receptive-field space φ and weight-memory space W to obtain the final output of the HNNCMAC, which is expressed as

O_q = Σ_{j=1}^{n_j} Σ_{k=1}^{n_k} w_jkq φ_jk

The initial parameters of the HNNCMAC are chosen randomly and updated by adaptive laws, which are derived using the steepest-descent gradient approach and a back-propagation algorithm, as described in the following section. The computational complexity in Big-O notation is O(T[(p_1 p_2 ⋯ p_{n_j}) + Σ_{j=1}^{n_j} p_j + n_l n_i n_a]), where T is the running time and p_j is the number of membership functions in the j-th layer of the association space.
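Putting the seven spaces together, a minimal forward-pass sketch might look as follows. The layer sizes mirror those reported later in the simulation section, and the random initialization follows the statement above; the variable names (`h`, `alpha`, `v`, `b`, `m`, `sigma`, `W`) are illustrative stand-ins for h_al, α_a, v_ia, b_i, m_ijk, σ_ijk, and w_jkq.

```python
import numpy as np

# Minimal forward-pass sketch of the HNNCMAC. The sizes (3 inputs, 10 hidden
# nodes, 2 CMAC inputs, 2 layers x 5 blocks, 1 output) are assumptions that
# mirror the simulation section; initial parameters are random, as the text
# specifies.
rng = np.random.default_rng(0)
n_l, n_a, n_i, n_j, n_k, n_q = 3, 10, 2, 2, 5, 1

h = rng.standard_normal((n_a, n_l))          # hidden weights h_al
alpha = rng.standard_normal(n_a)             # hidden biases alpha_a
v = rng.standard_normal((n_i, n_a))          # output NN weights v_ia
b = rng.standard_normal(n_i)                 # output NN biases b_i
m = rng.uniform(-1.0, 1.0, (n_i, n_j, n_k))  # Gaussian means m_ijk
sigma = np.full((n_i, n_j, n_k), 0.5)        # Gaussian variances sigma_ijk
W = rng.standard_normal((n_j, n_k, n_q))     # memory weights w_jkq

def hnncmac_forward(I):
    A = h @ I + alpha                  # space A: A_a = sum_l h_al I_l + alpha_a
    B = np.tanh(v @ A + b)             # space B: tanh keeps B inside (-1, 1)
    # Association space: Gaussian grade of every block for every input
    mu = np.exp(-((B[:, None, None] - m) ** 2) / sigma ** 2)
    # Receptive-field space: product of grades across the input dimension
    phi = np.prod(mu, axis=0)          # shape (n_j, n_k)
    # Final-output space: O_q = sum_jk w_jkq * phi_jk
    return np.tensordot(phi, W, axes=([0, 1], [0, 1]))

O = hnncmac_forward(np.array([0.5, -1.2, 2.0]))
```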

HNNCMAC Parameters-Learning Algorithm
The scheme of the HNNCMAC system is shown in Figure 3. The goal of the control system is to generate the control signal u_HNNCMAC(t), which forces the output of the dynamic time-varying plant, y(t), to track the reference signal y_d(t). The flowchart of the HNNCMAC system is shown in Figure 4, in which the input range and quantity for the proposed CMAC are reduced by the NN pre-controller. This reduces the number of neurons in the receptive-field and weight-memory spaces, so the structure of the CMAC can be significantly reduced. The high-order sliding mode from Manceur et al. (2012) and Zheng et al. (2014) is used to improve the performance of the control system:

s(t) = e^(n−1) + (n − 1)λe^(n−2) + ((n − 1)(n − 2)/2)λ²e^(n−3) + ⋯ + λ^(n−1)e

where λ and n are the slope and the order of the sliding surface, respectively; both λ and n are positive constants. The tracking error e(t) is defined as

e(t) = y_d(t) − y(t)

where y_d and y are the reference signal and the system output, respectively. Taking the derivative of Equation (7):

ṡ(t) = e^(n) + (n − 1)λe^(n−1) + ((n − 1)(n − 2)/2)λ²e^(n−2) + ⋯ + λ^(n−1)ė = e^(n) + K^T e(t)

where K = [(n − 1)λ, ((n − 1)(n − 2)/2)λ², ..., λ^(n−1)]^T ∈ ℜ^(n−1) is the positive gain vector and e(t) = [e^(n−1)(t), e^(n−2)(t), ..., ė(t)]^T ∈ ℜ^(n−1) is the tracking-error vector.
If the values for n and λ correspond to the coefficients of a Hurwitz polynomial, then lim_{t→∞} e(t) = 0.
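For a second-order plant (n = 2), the sliding surface reduces to s(t) = ė(t) + λe(t). A minimal sketch, using a backward-difference estimate of ė; the discretization and the sample values are assumptions for illustration, not from the paper:

```python
# Sliding variable for a second-order plant (n = 2): s(t) = de/dt + lam * e,
# with e = y_d - y. A positive lam makes s = 0 a stable (Hurwitz) error
# dynamic, so e -> 0 once the system stays on the surface.
def sliding_variable(e, e_prev, lam, dt):
    e_dot = (e - e_prev) / dt        # finite-difference estimate of de/dt
    return e_dot + lam * e

s = sliding_variable(e=0.2, e_prev=0.25, lam=2.0, dt=0.01)   # s is about -4.6
```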
The structure of the HNNCMAC has seven sets of parameters that are updated: w_jkq, m_ijk, σ_ijk, b_i, v_ia, α_a, and h_al. The Lyapunov cost function is chosen as V(t) = (1/2)s²(t), so V̇(t) = s(t)ṡ(t). An online gradient-descent learning algorithm is used to minimize V(t). The online tuning laws for the HNNCMAC parameters therefore all take the form

θ̂(k + 1) = θ̂(k) + Δθ̂, θ̂ ∈ {ŵ_jkq, m̂_ijk, σ̂_ijk, b̂_i, v̂_ia, α̂_a, ĥ_al}

where ŵ_jkq, m̂_ijk, σ̂_ijk, b̂_i, v̂_ia, α̂_a, and ĥ_al are the estimates of the optimal values of the parameters w_jkq, m_ijk, σ_ijk, b_i, v_ia, α_a, and h_al. The update terms in Equations (10-16) are obtained by back-propagation using the chain rule, where η_w, η_m, η_σ, η_b, η_v, η_a, and η_h are the positive learning rates of the adaptive laws. With this online tuning, the HNNCMAC can adjust its parameters online to achieve the desired performance.
Frontiers in Neuroscience | www.frontiersin.org
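All seven tuning laws share one template. A scalar sketch follows, in which the chain-rule factor is split into the network sensitivity ∂O/∂θ and a plant sensitivity ∂y/∂u that is assumed known (a common simplification; the exact chain-rule factors are those of Equations 10-23, and the names below are illustrative):

```python
# One generic tuning step theta(k+1) = theta(k) + delta_theta, with
# delta_theta = -eta * dV/dtheta = eta * s * (dy/du) * (dO/dtheta).
# The sign flips because e = y_d - y, so ds/dtheta = -(dy/du) * (dO/dtheta)
# in this scalar sketch; dy/du is an assumed-known plant sensitivity.
def update_param(theta, s, dO_dtheta, dy_du, eta):
    return theta + eta * s * dy_du * dO_dtheta

theta_new = update_param(theta=0.5, s=0.1, dO_dtheta=0.8, dy_du=1.0, eta=0.05)
```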
Proof of the algorithm convergence: the Lyapunov cost function is defined as V(k) = (1/2)s²(k) (Equation 24), and its rate of change is ΔV(k) = V(k + 1) − V(k) = (1/2)[s²(k + 1) − s²(k)] (Equation 25). By using a Taylor expansion, the difference in the sliding hyperplane is s(k + 1) = s(k) + Δs(k) (Equation 26). From Equation (17), Δs(k) can be expressed in terms of the weight update; by using Equations (27) and (17), Equation (26) is rewritten as Equation (28), and by using Equation (28), Equation (25) is rewritten as Equation (29). From Equation (29), if the learning rate η_w satisfies 0 < η_w < 2/ξ², then ΔV(t) is negative while the Lyapunov function V(t) > 0. Therefore, the convergence of the system is guaranteed by Lyapunov stability. A similar method is used to prove the stability conditions for the learning rates η_m, η_σ, η_b, η_v, η_a, and η_h.
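The learning-rate bound can be illustrated numerically: for a scalar weight whose sliding-variable sensitivity ξ = ∂s/∂ŵ is constant (the value below is purely illustrative), the gradient update decreases V exactly when 0 < η_w < 2/ξ²:

```python
# Illustration of the bound 0 < eta_w < 2/xi^2 for a scalar weight w where
# s depends on w with constant sensitivity xi = ds/dw (an assumed value).
# The gradient update delta_w = -eta * s * xi changes s to s + xi * delta_w.
def delta_V(s, xi, eta):
    s_new = s + xi * (-eta * s * xi)    # s(k+1) after one update
    return 0.5 * (s_new ** 2 - s ** 2)  # Delta V = V(k+1) - V(k)

# With xi = 2 the bound is 2 / xi**2 = 0.5:
inside = delta_V(1.0, 2.0, 0.1)   # eta within the bound: Delta V < 0
outside = delta_V(1.0, 2.0, 0.6)  # eta beyond the bound: Delta V > 0
```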

SIMULATION RESULTS
In this section, the performance of the proposed HNNCMAC is investigated. Three examples involving the control of dynamic time-varying plants are considered. Dynamic time-varying plants are plants whose parameters vary with time.
Example 1: controlling a dynamic time-varying plant borrowed from Narendra and Parthasarathy (1990) and Abiyev and Kaynak (2010), described by a difference equation in which u(t) is the control signal from the proposed HNNCMAC; y(t), y(t − 1), and y(t − 2) are the measurable plant output, the one-step-delayed plant output, and the two-step-delayed plant output, respectively; ε(t) = 0.1 sin(πt) and Δy(t) = 0.1y(t), respectively, denote the external disturbances and the system uncertainties; and f(y(t − 1), y(t − 2)) is the previous-plant-output function, which is given as

f(y(t − 1), y(t − 2)) = y(t − 1)y(t − 2)(y(t − 1) + 2.5) / (1 + y(t − 1)² + y(t − 2)²)

The desired trajectory signal y_d(t) and the system outputs for the dynamic time-varying plant are shown in Figure 5. Control signals and tracking errors are shown in Figures 6, 7, respectively. These results show that the HNNCMAC allows a time-varying plant to follow a specified trajectory signal. In terms of the performance of the control system, Table 1 shows a comparison of the root mean square error (RMSE) for the proposed method and other methods.

Example 2: controlling a dynamic time-varying plant borrowed from Zhang et al. (1998) and Abiyev and Kaynak (2010), described by a difference equation in which u(t) is the control signal from the proposed HNNCMAC; y(t), y(t − 1), and y(t − 2) are the measurable plant output, the one-step-delayed plant output, and the two-step-delayed plant output, respectively; ε(t) = 0.1 sin(πt) and Δy(t) = 0.1y(t), respectively, denote the external disturbances and the system uncertainties; and f(y(t − 1), y(t − 2)) is the previous-plant-output function, which is given as f(y(t − 1), y(t − 2)) = b_1(t)y(t − 1) + b_2(t)y(t − 2), where b_0(t), b_1(t), and b_2(t) are the time-varying plant parameters. Figure 8 shows the change in the time-varying parameters.
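The parts of Example 1 that are fully specified in the text, the function f and the disturbance ε(t), can be written down directly; `f` and `disturbance` are illustrative names:

```python
import numpy as np

# Example 1: previous-plant-output function and external disturbance as
# given in the text; Delta_y(t) = 0.1*y(t) is the stated uncertainty term.
def f(y1, y2):
    # f(y(t-1), y(t-2)) = y(t-1) y(t-2) (y(t-1) + 2.5) / (1 + y(t-1)^2 + y(t-2)^2)
    return y1 * y2 * (y1 + 2.5) / (1.0 + y1 ** 2 + y2 ** 2)

def disturbance(t):
    return 0.1 * np.sin(np.pi * t)   # epsilon(t) = 0.1 sin(pi t)
```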
The desired trajectory signal and the outputs for the dynamic time-varying plant are shown in Figure 9. Control signals and tracking errors are shown in Figures 10, 11, respectively. Simulation results showed that the HNNCMAC allows a time-varying plant to follow the reference signal, even when there are abrupt changes in parameters a_1 and a_2. Table 1 shows a comparison of the RMSE for the proposed method and other methods.

Example 3: controlling a dynamic time-varying plant to follow variable-frequency signals.
This example uses the same dynamic time-varying plant that is described in Example 2. The desired trajectory is a variable-frequency signal: a square wave y_d1(t) and a sinusoid y_d2(t) = 5 sin(2πt k_t), where square and sin are the square function and the sinusoidal function, respectively, and k_t is the parameter for changing the signal frequency, which varies with time. By using the square signal with varying frequency in Equation (35) as the desired trajectory, the reference signal and the system outputs for the time-varying plant are shown in Figure 12.
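The two variable-frequency references of Example 3 can be sketched as follows. Since the exact schedule for k_t is not reproduced above, the piecewise-constant schedule below is purely an assumed placeholder:

```python
import numpy as np

def k_t(t):
    # Assumed placeholder schedule: the frequency parameter steps up over time.
    # The paper's actual k_t schedule is not reproduced in the text.
    return 1.0 + 0.5 * np.floor(t / 10.0)

def y_d1(t):
    # Square reference: sign of a sinusoid, scaled to amplitude 5
    return 5.0 * np.sign(np.sin(2.0 * np.pi * t * k_t(t)))

def y_d2(t):
    # Sinusoidal reference y_d2(t) = 5 sin(2 pi t k_t)
    return 5.0 * np.sin(2.0 * np.pi * t * k_t(t))
```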
The control signals and tracking errors for this case are shown in Figures 13, 14, respectively. Figure 15 shows the reference signals and system outputs for the time-varying plant when the desired trajectory is the sinusoidal signal with varying frequency in Equation (36). The control signals are plotted in Figure 16, and the tracking errors are plotted in Figure 17. Simulation results for the sinusoidal reference showed that, at the beginning of the control process, the proposed controller controlled the system well; but as the frequency increases with time, and when the time-varying plant parameters change suddenly, the tracking error rises because the controller needs time to adapt to these changes. As shown in Figures 7, 11, 14, 17, there were rapid error variations whenever the reference signals or the time-varying plant parameters changed suddenly. However, the proposed controller adapted to these changes better, and the tracking error using the proposed HNNCMAC converged more quickly than with the other control methods. The external disturbances and the system uncertainties in this case were chosen as ε(t) = 0.8 sin(πt) and Δy(t) = 0.3y(t), respectively. A comparison of the RMSE for following the variable-frequency signal is shown in Table 1.

DISCUSSION
For these control problems, the HNNCMAC structure had three neurons in the input space, 10 neurons in the hidden space, and two neurons in the NN output space. The association space had two layers, each with five Gaussian membership functions. The input for the HNNCMAC control system was the output of the sliding hyperplane, its one-step delay, and its derivative: s(t), s(t − 1), and ṡ(t). Term s(t − 1) is used to obtain more information about the time-varying plants. The proposed controller handles the external disturbances and the system uncertainties well. The convergence of the proposed controller is guaranteed by the Lyapunov stability analysis in Equation (29). The average RMSE over all examples for the proposed HNNCMAC, the multilayer perceptron NN (MPNN), the conventional CMAC, the interval type-2 Petri CMAC (IT2PCMAC) (Le et al., 2019), and the type-2 Takagi-Sugeno-Kang fuzzy neural system (T2TSKFNS) (Abiyev and Kaynak, 2010) is shown in Table 1. The proposed controller uses the NN to reduce the inputs to the CMAC, so the structure of the modified CMAC in the proposed network is smaller than that of a conventional CMAC; this is more effective when the number of inputs is large. Table 1 shows that the proposed controller has a smaller computation time than a conventional CMAC because the modified CMAC structure uses the NN pre-controller to reduce the computational complexity of the CMAC. Moreover, the NN output uses the tangent sigmoid function to limit the output to [−1, 1]. Therefore, it is easy to design the network size and the initial membership functions in the modified CMAC controller. As shown in Table 1, the proposed HNNCMAC achieved the best control performance, with the smallest RMSE among the compared controllers.
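The RMSE figures compared in Table 1 follow the standard definition, sketched here for completeness (the function name is illustrative):

```python
import numpy as np

def rmse(y, y_d):
    # Root mean square error between plant output y and reference y_d
    y, y_d = np.asarray(y, dtype=float), np.asarray(y_d, dtype=float)
    return float(np.sqrt(np.mean((y - y_d) ** 2)))
```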
Tables A-D in Appendix A show an analysis of the difference between the proposed controller and the other controllers using the t-test statistical approach. In all examples, the statistical results showed that the P-value was lower than the alpha level (α = 0.05). Thus, we can conclude that the RMSE results of the proposed controller differed significantly from those of the other controllers, which illustrates its superiority. Real-world applications with many inputs, such as medical diagnosis, classification, and image processing, could apply the proposed network to reduce the network structure. The choice of parameters for the sliding surface strongly affects the control performance. This study used a trial-and-error approach to obtain suitable parameters. Further studies should investigate estimation methods for these parameters to achieve better control performance.

CONCLUSIONS
This paper proposed an HNNCMAC that is used to control non-linear dynamic time-varying plants. The main contributions of this study are that it demonstrated a method to control a non-linear dynamic time-varying plant; the HNNCMAC structure uses adaptive laws to adjust its parameters online; the input range and quantity in the proposed CMAC can be reduced by the NN pre-controller, which makes it easy to design the network size and the initial membership functions; the stability of the proposed method is guaranteed by Lyapunov analysis; and the numerical-simulation results for controlling a time-varying plant show the superiority of the proposed method over existing methods. Moreover, the proposed controller is simple to design and implement, and it can be applied to other fields such as system identification, classification, and prediction. Future work will apply an optimization algorithm to tune the sliding-surface parameters and the learning rates in the adaptive laws to achieve better control performance.

DATA AVAILABILITY STATEMENT
All datasets generated/analyzed for this study are included in the article/Supplementary Material.