# Hybrid Neural Network Cerebellar Model Articulation Controller Design for Non-linear Dynamic Time-Varying Plants

^{1}Faculty of Mechanical and Aerospace, Sejong University, Seoul, South Korea
^{2}Department of Electrical Electronic and Mechanical Engineering, Lac Hong University, Bien Hoa, Vietnam
^{3}Department of Electrical Engineering, Yuan Ze University, Taoyuan, Taiwan

This study proposes a hybrid method for controlling dynamic time-varying plants that comprises a neural-network controller and a cerebellar model articulation controller (CMAC). The neural-network controller reduces the range and quantity of the inputs. The CMAC is the main controller and computes the final control output. The parameters of the proposed network are adjusted using adaptive laws, which are derived using the steepest-descent gradient approach and a back-propagation algorithm. Lyapunov stability theory is applied to guarantee system convergence. With the proposed combined architecture, the designed CMAC structure is reduced in size, which makes it easy to design the network size and the initial membership functions. Finally, numerical-simulation results demonstrate the effectiveness of the proposed method.

## Introduction

Nowadays, the control of non-linear systems is a topic that continues to attract many researchers because of its widespread applications. In many practical cases, the challenge of this topic is that the mathematical model is poorly known or uncertain (Liu et al., 2011). Furthermore, non-linear systems are susceptible to internal and external disturbances (Li et al., 2018). Therefore, in recent years, some studies have used neural networks (NNs) to approximate non-linear functions (Zhou and Zhang, 2015; Han, 2018). Some studies combined a neural network with other methods to achieve better control performance, such as proportional-integral-derivative (PID) NNs, fuzzy NNs, and sliding-mode NNs (Zou et al., 2011; Zhou and Zhang, 2015; Lin and Le, 2017a; Zhao et al., 2018; Wang et al., 2019). Neural networks enable large-scale concurrent computing, processing, and adaptive weight adjustment, and they are simple and convenient (Prieto et al., 2016). Recently, many studies have used NNs to address control, system-identification, and prediction problems. In 2013, Li et al. developed an optical-interference pattern-sensing method and neural-network classification for pretesting gap mura on thin-film transistor liquid crystal displays (Li et al., 2013). In 2017, Sun and Pan developed a reliable neural network to control non-affine non-linear systems (Sun and Pan, 2017). In 2018, Wang et al. presented a memristor-based artificial neural network to predict house prices (Wang et al., 2018). However, neural networks require a considerable amount of computational resources, there is a risk of overfitting, and the architecture must be defined in advance (Tu, 1996).

The concept of a cerebellar-model articulation controller (CMAC) was first proposed by Albus (1975). It is a type of neural network that uses a model of the mammalian cerebellum (associative memory). It addresses the problems of fast-growing size and the learning difficulties that are inherent to conventional neural networks. Several studies showed that, for applications that require online learning, CMACs perform better than simple neural networks (Lin and Chen, 2009; Guan et al., 2019). Since CMACs have a non-fully connected, perceptron-like associative-memory network with overlapping receptive fields, they learn quickly and their computation is simple. By contrast, neural networks are fully connected, so all weights are updated during each learning cycle; the learning of a neural network is therefore essentially global in nature and slow (Lin and Chen, 2009). The main advantages of CMACs over NNs, multilayer perceptrons (MLPs), and radial-basis-function networks (RBFNs) are fast learning, simple computation, and good generalization capability (Lin et al., 2013). Recent studies have proposed some modified CMACs with better performance, such as function-link, self-organizing, and type-2 fuzzy CMACs. In 2016, Lin et al. proposed a type-2 fuzzy CMAC for an adaptive filter (Lin et al., 2016). In 2017, Lin and Le used a wavelet CMAC to control non-linear systems (Lin and Le, 2017b). In 2018, Tsao et al. proposed the use of a deep CMAC for an adaptive noise-cancellation system (Tsao et al., 2018). A conventional CMAC also has some disadvantages: for example, it is difficult to determine a suitable network size and to select the initial membership functions (MFs) that achieve the best performance (Lin and Chen, 2009). This is particularly difficult when the network has many inputs and each input has a large range.

This study proposes a new method with a structure that includes a neural network connected in series with a CMAC. The neural network reduces the quantity and range of all inputs. The outputs of the NN feed into the CMAC to compute the final outputs. This proposed network structure is referred to as a hybrid neural-network–CMAC (HNNCMAC). It is used to control dynamic time-varying plants. The motivation behind cascading the two architectures is to keep the inputs to the CMAC structure small, avoiding the difficulty of selecting a suitable network size and the initial membership functions. In the CMAC structure, the number of neurons in the receptive-field spaces increases exponentially with the number of neurons in the input space. Our proposed HNNCMAC controller uses the NN to reduce the inputs to the CMAC, so the structure of the modified CMAC in our proposed network is smaller than that of a conventional CMAC. This is more effective when the number of inputs is large. In comparison with previous modified CMAC neural networks, as in Lin and Le (2017b) and Lin et al. (2018a,b), the proposed HNNCMAC has some advantages, such as a small CMAC structure and ease in designing the network size and initial membership functions. The main contributions of this study are: (1) the successful design of an adaptive HNNCMAC system for the control of non-linear dynamic time-varying plants; (2) adaptive laws are derived using the steepest-descent gradient approach and a back-propagation algorithm; (3) the input range and quantity in the proposed CMAC can be reduced by the NN pre-controller; (4) the stability of the proposed method is guaranteed by Lyapunov analysis; and (5) the method can be used for non-linear control problems, as shown by the results of numerical simulations.

The remaining sections of the paper are organized as follows. The design of the HNNCMAC is presented in the Methods section. The Simulation Results section presents the simulation results for controlling dynamic time-varying plants, followed by the Discussion section. Finally, conclusions are given in the last section.

## Methods

### HNNCMAC Structure

The structure of the HNNCMAC includes a neural network that is connected in series with a CMAC. The NN reduces the range and the quantity of the input, and the output from the NN becomes the input for the CMAC to compute the final control output. Figure 1 shows the structure of the HNNCMAC, which has seven spaces: input, hidden NN, output NN, association, receptive-field, weight-memory, and final-output spaces. These are described below.

(1) Input space *I*: There is no computation in this space. Input data from the dataset are fed into this space and directly transferred to the next space.

(2) Hidden NN space *A*: each node in this space performs a multiplication between input vector $I={\left[{I}_{1},{I}_{2},\dots ,{I}_{{n}_{l}}\right]}^{T}$ and hidden NN weight matrix $h=\left[\begin{array}{c}{h}_{1}\\ {h}_{2}\\ \vdots \\ {h}_{{n}_{a}}\end{array}\right]=\left[\begin{array}{l}{h}_{11},{h}_{12},\dots ,{h}_{1{n}_{l}}\\ {h}_{21},{h}_{22},\dots ,{h}_{2{n}_{l}}\\ \vdots \text{}\ddots \text{}\vdots \\ {h}_{{n}_{a}1},{h}_{{n}_{a}2},\dots ,{h}_{{n}_{a}{n}_{l}}\end{array}\right]$; the result is then added to a bias $\alpha ={\left[{\alpha}_{1},{\alpha}_{2},\dots ,{\alpha}_{{n}_{a}}\right]}^{T}$, where ${n}_{a}$ is the number of nodes in the hidden NN space and ${n}_{l}$ is the number of nodes in the input space; ${h}_{al}$ is the connecting weight from the $a$^{th}, $a=1,\dots ,{n}_{a}$, neuron in space *A* to the $l$^{th}, $l=1,\dots ,{n}_{l}$, neuron in space *I*. For example, the output from the $x$^{th} node in this space is derived as

${A}_{x}=\sum _{l=1}^{{n}_{l}}{h}_{xl}{I}_{l}+{\alpha}_{x}$

where ${\alpha}_{x}$ is the bias of the $x$^{th} neuron.

Then, the output from this space is expressed as $A={\left[{A}_{1},{A}_{2},\dots ,{A}_{{n}_{a}}\right]}^{T}$.
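The hidden NN space can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation: the affine form $A_a=\sum_l h_{al}I_l+\alpha_a$ follows the text, and no activation is applied here because the text describes only a multiplication and a bias addition.

```python
def hidden_nn_space(I, h, alpha):
    """Hidden NN space A: A_a = sum_l h[a][l] * I[l] + alpha[a].

    I     -- input vector of length n_l
    h     -- hidden NN weight matrix, n_a rows of n_l weights
    alpha -- bias vector of length n_a
    Returns the vector A of length n_a.
    """
    return [sum(h_al * I_l for h_al, I_l in zip(row, I)) + a
            for row, a in zip(h, alpha)]
```

For instance, with two inputs and two hidden nodes, `hidden_nn_space([1.0, 2.0], [[0.5, 0.0], [0.0, 0.5]], [0.1, 0.1])` returns `[0.6, 1.1]`.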

(3) Output NN space *B*: this space produces the output from the neural network, which serves as the input for the CMAC. This layer performs a multiplication between the vector from the previous layer $A={\left[{A}_{1},{A}_{2},\dots ,{A}_{{n}_{a}}\right]}^{T}$ and output NN weight matrix $v=\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ \vdots \\ {v}_{{n}_{i}}\end{array}\right]=\left[\begin{array}{c}{v}_{11},{v}_{12},\dots ,{v}_{1{n}_{a}}\\ {v}_{21},{v}_{22},\dots ,{v}_{2{n}_{a}}\\ \vdots \text{}\ddots \text{}\vdots \\ {v}_{{n}_{i}1},{v}_{{n}_{i}2},\dots ,{v}_{{n}_{i}{n}_{a}}\end{array}\right]$; a bias $b={\left[{b}_{1},{b}_{2},\dots ,{b}_{{n}_{i}}\right]}^{T}$ is then added. To limit the input range for the CMAC, the result is passed through a tangent sigmoid function. The output for the $i$^{th} node is

${B}_{i}=\mathrm{tanh}\left(ne{t}_{{B}_{i}}\right)$

where $ne{t}_{{B}_{i}}=\left[{v}_{i1},{v}_{i2},\dots ,{v}_{i{n}_{a}}\right]{\left[{A}_{1},{A}_{2},\dots ,{A}_{{n}_{a}}\right]}^{T}+{b}_{i}$; ${v}_{ia}$ is the connecting weight from the $i$^{th}, $i=1,\dots ,{n}_{i}$, neuron in space *B* to the $a$^{th} neuron in space *A*. The output for this space is expressed as $B={\left[{B}_{1},{B}_{2},\dots ,{B}_{{n}_{i}}\right]}^{T}$.
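A sketch of the output NN space follows, assuming the tangent sigmoid is the hyperbolic tangent (consistent with the text's statement that the output is bounded to limit the CMAC input range):

```python
import math

def output_nn_space(A, v, b):
    """Output NN space B: B_i = tanh(net_Bi),
    net_Bi = sum_a v[i][a] * A[a] + b[i].
    tanh bounds each CMAC input to the open interval (-1, 1)."""
    return [math.tanh(sum(v_ia * A_a for v_ia, A_a in zip(row, A)) + b_i)
            for row, b_i in zip(v, b)]
```

Because of the tanh, every element of `B` is strictly inside (−1, 1), however large the hidden-layer activations are; this is what makes the CMAC input range easy to design.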

(4) Association space *F*: In this space, several elements are accumulated as a block. The membership grades in each block are calculated from the input variables ${B}_{i}$ of the previous space using Gaussian MFs:

${\mu}_{ijk}=\mathrm{exp}\left(-\frac{{\left({B}_{i}-{m}_{ijk}\right)}^{2}}{{\sigma}_{ijk}^{2}}\right)$

where ${m}_{ijk}$ is the mean and ${\sigma}_{ijk}$ is the variance of the $k$^{th} block in the $j$^{th} layer that corresponds to the $i$^{th} input variable; ${n}_{j}$ is the number of layers; and ${n}_{k}$ is the number of blocks. Therefore, the output from this space is the association vector.

(5) Receptive-field space ϕ : This layer performs the mapping that relates each location of *F* to generate the receptive-field vector:

$\varphi ={\left[{\varphi}_{11},...,{\varphi}_{1{n}_{k}},...,{\varphi}_{{n}_{j}1},...,\text{}{\varphi}_{{n}_{j}{n}_{k}}\right]}^{T}\in {\Re}^{{n}_{j}{n}_{k}}$

where ${\varphi}_{jk}=\prod _{i=1}^{{n}_{i}}{\mu}_{ijk}$ is the product of the membership grades of all inputs for the $k$^{th} block in the $j$^{th} layer.

The mechanism for mapping 2D input is shown in Figure 2.
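The association and receptive-field spaces together can be sketched as follows. The Gaussian form (squared distance over the variance) and the product across inputs follow the standard fuzzy-CMAC construction; the paper's exact equation is not reproduced in this excerpt, so treat the formula as an assumption consistent with the surrounding definitions.

```python
import math

def receptive_field(B, m, sigma):
    """Association space F and receptive-field space phi.

    mu[i][j][k] = exp(-(B[i] - m[i][j][k])**2 / sigma[i][j][k]**2)
    phi[j][k]   = product over i of mu[i][j][k]

    B     -- CMAC inputs, length n_i
    m     -- means,    indexed [input][layer][block]
    sigma -- variances, indexed [input][layer][block]
    """
    n_i, n_j, n_k = len(B), len(m[0]), len(m[0][0])
    mu = [[[math.exp(-(B[i] - m[i][j][k]) ** 2 / sigma[i][j][k] ** 2)
            for k in range(n_k)] for j in range(n_j)] for i in range(n_i)]
    phi = [[math.prod(mu[i][j][k] for i in range(n_i))
            for k in range(n_k)] for j in range(n_j)]
    return mu, phi
```

A grade of 1 is attained only when an input sits exactly on a block's mean, and it decays smoothly with distance, which gives the overlapping receptive fields shown in Figure 2.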

(6) Weight-memory space *W*: Each element of $\varphi$ is mapped to a specific adjustable value in *W*, which is expressed as

$W={\left[{w}_{11q},...,{w}_{1{n}_{k}q},...,{w}_{{n}_{j}1q},...,{w}_{{n}_{j}{n}_{k}q}\right]}^{T}\in {\Re}^{{n}_{j}{n}_{k}}$

where *w*_{jkq} is the connecting weight for the *q*th, *q* = 1, …, *n*_{q}, final output and the receptive-field space for the *j*th layer and *k*th block.

(7) Final-output space *O*: This space performs the product operation of receptive-field space $\varphi$ and weight-memory space *W* to obtain the final output of the HNNCMAC, which is expressed as

${O}_{q}={W}^{T}\varphi =\sum _{j=1}^{{n}_{j}}\sum _{k=1}^{{n}_{k}}{w}_{jkq}{\varphi}_{jk},\text{}q=1,\dots ,{n}_{q}$

The initial parameters for the HNNCMAC are chosen randomly and updated by adaptive laws, which are derived using the steepest-descent gradient approach and a back-propagation algorithm, as described in the following section. The computational complexity in Big-O notation is $O\left(T\left[\mathrm{max}\left({p}_{1},{p}_{2},\dots ,{p}_{{n}_{j}}\right)+\prod _{j=1}^{{n}_{j}}{p}_{j}+{n}_{l}{n}_{i}{n}_{a}\right]\right)$, where $T$ is the running time and ${p}_{j}$ is the number of membership functions in the $j$^{th} layer of the association space.
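The final-output space is a weighted sum of the receptive-field vector, sketched below under the assumption that the weights are indexed `[layer][block][output]` (the paper's notation $w_{jkq}$):

```python
def final_output(phi, w, n_q):
    """Final-output space O: O_q = sum over j, k of w[j][k][q] * phi[j][k].

    phi -- receptive-field values, indexed [layer][block]
    w   -- memory weights, indexed [layer][block][output]
    n_q -- number of final outputs
    """
    return [sum(w[j][k][q] * phi[j][k]
                for j in range(len(phi)) for k in range(len(phi[0])))
            for q in range(n_q)]
```

With one layer, two blocks, and one output, `final_output([[1.0, 2.0]], [[[0.5], [0.25]]], 1)` returns `[1.0]` (0.5·1 + 0.25·2).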

### HNNCMAC Parameters—Learning Algorithm

The scheme for the HNNCMAC system is shown in Figure 3. The goal of the control system is to generate control signal û_{HNNCMAC}(*t*), which forces the output of the dynamic time-varying plant, *y*(*t*), to track reference signal *y*_{d}(*t*). The flowchart of the HNNCMAC system is shown in Figure 4, in which the input range and quantity in the proposed CMAC are reduced by the NN pre-controller. This reduces the number of neurons in the receptive-field space and the weight-memory space, so the structure of the CMAC can be significantly reduced.

The high-order sliding mode from Manceur et al. (2012) and Zheng et al. (2014) is used to improve the performance of the control system:

$s\left(t\right)={\left(\frac{d}{dt}+\lambda \right)}^{n-1}e\left(t\right)$

where λ and *n* are the slope and the order of the sliding surface, respectively; both are positive constants. Tracking error *e*(*t*) is defined as

$e\left(t\right)={y}_{d}\left(t\right)-y\left(t\right)$

where *y*_{d} and *y* are reference signal and system output, respectively.

Taking the derivative of Equation (7) gives

$\dot{s}\left(t\right)={e}^{\left(n\right)}\left(t\right)+{K}^{T}e\left(t\right)$

where $K={\left[(n-1)\lambda ,\frac{(n-1)(n-2)}{2}{\lambda}^{2},...,{\lambda}^{n-1}\right]}^{T}\in {\Re}^{n-1}$ is the positive gain vector and $e\left(t\right)={\left[{e}^{\left(n-1\right)}\left(t\right),{e}^{\left(n-2\right)}\left(t\right),\dots ,\dot{e}\left(t\right)\right]}^{T}\in {\Re}^{n-1}$ is the tracking-error vector.

If the values for *n* and λ correspond to the coefficients of a Hurwitz polynomial, then $\underset{t\to \infty}{\mathrm{\text{lim}}}e(t)=0$.
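The entries of the gain vector $K$ are the binomial coefficients of the expansion of $(d/dt+\lambda)^{n-1}$, so they can be generated directly. This is a sketch consistent with the listed form $[(n-1)\lambda,\frac{(n-1)(n-2)}{2}\lambda^2,\dots,\lambda^{n-1}]$:

```python
import math

def sliding_gains(n, lam):
    """Gain vector K of the high-order sliding surface.

    K_r = C(n-1, r) * lam**r for r = 1..n-1, which reproduces
    [(n-1)*lam, (n-1)(n-2)/2 * lam**2, ..., lam**(n-1)].
    n   -- order of the sliding surface (positive integer)
    lam -- slope lambda (positive constant)
    """
    return [math.comb(n - 1, r) * lam ** r for r in range(1, n)]
```

For the paper's settings *n* = 3 and λ = 0.2, this yields K = [0.4, 0.04]; since all coefficients of $(p+\lambda)^{n-1}$ are positive for λ > 0, the polynomial is Hurwitz and the tracking error converges.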

The structure of the HNNCMAC has seven parameter sets that are updated: *w*_{jkq}, *m*_{ijk}, σ_{ijk}, *b*_{i}, *v*_{ia}, α_{a}, and *h*_{al}. The Lyapunov cost function is chosen as $V(t)=\frac{1}{2}{s}^{2}(t)$, so $\dot{V}(t)=s(t)\dot{s}(t)$. An online gradient-descent learning algorithm is used to minimize $\dot{V}(t)$. Therefore, the online tuning laws for the HNNCMAC parameters are given by the following equations:

where ${\widehat{w}}_{jkq},{\widehat{m}}_{ijk},{\widehat{\sigma}}_{ijk},{\widehat{b}}_{i},{\widehat{v}}_{ia},{\widehat{\alpha}}_{a},{\widehat{h}}_{al}$ are the estimates of the optimal values of parameters *w*_{jkq}, *m*_{ijk}, σ_{ijk}, *b*_{i}, *v*_{ia}, α_{a}, *h*_{al}; and $\Delta {\widehat{w}}_{jkq},\Delta {\widehat{m}}_{ijk},\Delta {\widehat{\sigma}}_{ijk},\Delta {\widehat{b}}_{i},\Delta {\widehat{v}}_{ia},\Delta {\widehat{\alpha}}_{a},\Delta {\widehat{h}}_{al}$ are the estimates of the optimal values of Δ*w*_{jkq}, Δ*m*_{ijk}, Δσ_{ijk}, Δ*b*_{i}, Δ*v*_{ia}, Δα_{a}, Δ*h*_{al}.

The updating terms in Equations (10–16) are obtained by back-propagation using the following chain rules:

where ${\widehat{\eta}}_{m},{\widehat{\eta}}_{\sigma},{\widehat{\eta}}_{b},{\widehat{\eta}}_{v},\text{}{\widehat{\eta}}_{a},{\widehat{\eta}}_{h}$ are the positive learning rates for the adaptive laws.
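The update pattern can be sketched for the memory weights, whose gradient is the simplest of the seven: since $O_q=\sum w_{jkq}\varphi_{jk}$, the chain rule gives a correction proportional to $s(t)\,\varphi_{jk}$. The sign convention and any extra chain-rule factors (e.g., the plant sensitivity) are assumptions here, as Equations (10–23) are not reproduced in this excerpt; the other six parameter sets follow the same back-propagated pattern with longer chains.

```python
def update_weights(w, phi, s, eta_w):
    """Schematic steepest-descent update for the memory weights:
    w[j][k][q] <- w[j][k][q] + eta_w * s * phi[j][k].

    w     -- weights, indexed [layer][block][output]
    phi   -- receptive-field values, indexed [layer][block]
    s     -- current sliding-surface value s(t)
    eta_w -- positive learning rate
    Returns the updated weight array (inputs are not mutated).
    """
    return [[[w[j][k][q] + eta_w * s * phi[j][k]
              for q in range(len(w[j][k]))]
             for k in range(len(w[j]))]
            for j in range(len(w))]
```

Note that a weight only moves when its receptive field is active (φ_{jk} > 0), which is the local-learning property that makes CMAC training fast.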

Using these online tuning laws, the HNNCMAC can adjust its parameters online to achieve the desired performance.

Proof of the algorithm convergence:

The Lyapunov cost function is defined as

Therefore, the rate of change for Equation (24) is

By using the Taylor expansion, the difference in the sliding hyperplane is

From Equation (17), it can be seen that

By using Equations (27) and (17), Equation (26) is rewritten as

By using Equation (28), Equation (25) is rewritten as

From Equation (29), if learning rate ${\widehat{\eta}}_{w}$ satisfies $0<{\widehat{\eta}}_{w}<\frac{2}{{\xi}^{2}}$, then Δ*V*(*t*) is negative while Lyapunov function *V*(*t*) > 0. Therefore, the convergence of the system is guaranteed by Lyapunov stability theory. A similar method is used to prove the stability conditions for learning rates ${\widehat{\eta}}_{m},{\widehat{\eta}}_{\sigma},{\widehat{\eta}}_{b},{\widehat{\eta}}_{v},{\widehat{\eta}}_{a},{\widehat{\eta}}_{h}$.

## Simulation Results

In this section, the performance of the proposed HNNCMAC is investigated. Three examples of controlling dynamic time-varying plants are considered. Dynamic time-varying plants are plants whose parameters vary with time.

**Example 1:** Controlling a dynamic time-varying plant borrowed from Narendra and Parthasarathy (1990) and Abiyev and Kaynak (2010), which is described by the difference equation

$y\left(t\right)=f\left[y\left(t-1\right),y\left(t-2\right)\right]+u\left(t\right)+\epsilon \left(t\right)+\Delta y\left(t\right)$

where *u(t)* is the control signal from the proposed HNNCMAC; *y(t)*, *y(t-1)*, and *y(t-2)* are the measurable plant output, one-step-delayed plant output, and two-step-delayed plant output, respectively; ε(*t*) = 0.1 sin(π*t*) and Δ*y*(*t*) = 0.1 *y*(*t*), respectively, denote the external disturbances and the system uncertainties; and *f*[*y*(*t* − 1), *y*(*t* − 2)] is a function of the previous plant outputs, given as $f\left[y(t-1),y(t-2)\right]=\frac{y(t-1)y(t-2)(y(t-1)+2.5)}{(1+y{(t-1)}^{2}+y{(t-2)}^{2})}$.
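One plant step can be simulated directly from the given terms. The nonlinear term *f* is exactly as stated; the additive way the control, disturbance, and uncertainty terms enter the difference equation is an assumption for illustration (the full equation appears in Narendra and Parthasarathy, 1990):

```python
import math

def f_plant(y1, y2):
    """Nonlinear term f[y(t-1), y(t-2)] from Example 1."""
    return y1 * y2 * (y1 + 2.5) / (1.0 + y1 ** 2 + y2 ** 2)

def plant_step(y1, y2, u, t):
    """One step of the time-varying plant, assuming the additive form
    y(t) = f + u + eps + dy. eps(t) = 0.1*sin(pi*t) follows the text;
    the uncertainty term is approximated here with the delayed output y1."""
    eps = 0.1 * math.sin(math.pi * t)
    return f_plant(y1, y2) + u + eps + 0.1 * y1
```

A closed-loop simulation would feed the HNNCMAC output û into `u` at each step and compare `y` against the reference *y*_{d}(*t*).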

The desired trajectory signal *y*_{d}(*t*) is given as

The desired trajectory signal and the system outputs for the dynamic time-varying plant are shown in Figure 5. Control signals and tracking errors are shown in Figures 6, 7, respectively. These results show that the HNNCMAC allows a time-varying plant to follow a specified trajectory signal. In terms of the performance of the control system, Table 1 shows a comparison of the root mean square error (RMSE) for the proposed method and other methods.

**Figure 5**. Comparison of system outputs between proposed HNNCMAC and other control methods for Example 1.

**Figure 6**. Comparison of control signals between proposed HNNCMAC and other control methods for Example 1.

**Figure 7**. Comparison of tracking errors between proposed HNNCMAC and other control methods for Example 1.

**Example 2**: Controlling a dynamic time-varying plant borrowed from Zhang et al. (1998) and Abiyev and Kaynak (2010), which is described by the difference equation

$y\left(t\right)=f\left[y\left(t-1\right),y\left(t-2\right)\right]+{b}_{0}\left(t\right)u\left(t\right)+\epsilon \left(t\right)+\Delta y\left(t\right)$

where *u(t)* is the control signal from the proposed HNNCMAC; *y(t)*, *y(t-1)*, and *y(t-2)* are the measurable plant output, one-step-delayed plant output, and two-step-delayed plant output, respectively; ε(*t*) = 0.1 sin(π*t*) and Δ*y*(*t*) = 0.1 *y*(*t*), respectively, denote the external disturbances and the system uncertainties; *f*[*y*(*t* − 1), *y*(*t* − 2)] is a function of the previous plant outputs, given as *f*[*y*(*t* − 1), *y*(*t* − 2)] = *b*_{1}(*t*)*y*(*t* − 1)+*b*_{2}(*t*)*y*(*t* − 2); *b*_{0}(*t*), *b*_{1}(*t*), and *b*_{2}(*t*) are time-varying functions, which are given as ${b}_{0}(t)=-\frac{{t}^{2}}{1+{a}_{1}(t)t+{a}_{2}(t){t}^{2}}$; ${b}_{1}(t)=\frac{2+{a}_{1}(t)t}{1+{a}_{1}(t)t+{a}_{2}(t){t}^{2}}$; ${b}_{2}(t)=-\frac{1}{1+{a}_{1}(t)t+{a}_{2}(t){t}^{2}}$; and *a*_{1}(*t*) and *a*_{2}(*t*) are the time-varying plant parameters, which are given as
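The three time-varying coefficients share the denominator $D=1+a_1(t)t+a_2(t)t^2$, so they are convenient to compute together. The functions *a*_{1}(*t*) and *a*_{2}(*t*) are given by equations not reproduced in this excerpt, so they are passed in as arguments here:

```python
def tv_coefficients(t, a1, a2):
    """Time-varying coefficients of Example 2:
      b0 = -t**2 / D,  b1 = (2 + a1*t) / D,  b2 = -1 / D,
    with the shared denominator D = 1 + a1*t + a2*t**2.
    a1, a2 are the plant parameters a_1(t), a_2(t) evaluated at time t."""
    D = 1.0 + a1 * t + a2 * t * t
    return -t * t / D, (2.0 + a1 * t) / D, -1.0 / D
```

With a1 = a2 = 0 the denominator is 1, so `tv_coefficients(1.0, 0.0, 0.0)` returns `(-1.0, 2.0, -1.0)`, which is a quick sanity check of the three formulas.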

The desired trajectory signal is given as

Figure 8 shows the change in the time-varying parameters. The desired trajectory signal and the outputs for the dynamic time-varying plant are shown in Figure 9. Control signals and tracking errors are shown in Figures 10, 11, respectively. Simulation results showed that the HNNCMAC allows a time-varying plant to follow the reference signal, even if there are abrupt changes in parameters *a*_{1} and *a*_{2}. Table 1 shows a comparison of the RMSE for the proposed method and other methods.

**Figure 9**. Comparison of system outputs between proposed HNNCMAC and other control methods for Example 2.

**Figure 10**. Comparison of control signals between proposed HNNCMAC and other control methods for Example 2.

**Figure 11**. Comparison of tracking errors between proposed HNNCMAC and other control methods for Example 2.

**Example 3**: Controlling a dynamic time-varying plant to follow variable frequency signals.

This example uses the same dynamic time-varying plant that is described in Example 2. The desired trajectory signal is the variable frequency signal:

and

where *square* and *sin* are the square function and the sinusoidal function, respectively, and *k*_{t} is the parameter for changing the signal frequency, which changes with time as follows:
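A variable-frequency reference of this kind can be sketched as below. The exact argument form of Equations (35) and (36) and the schedule for *k*_{t} are not reproduced in this excerpt, so the sinusoid's argument here is an assumption for illustration, and *k*_{t} is passed in by the caller:

```python
import math

def variable_freq_reference(t, k_t, wave="sin"):
    """Variable-frequency reference signal.

    wave="sin"    -> sin(k_t * pi * t)
    wave="square" -> sign of the same sinusoid (a +/-1 square wave)
    k_t is the frequency-scaling parameter, scheduled externally.
    """
    phase = math.sin(k_t * math.pi * t)
    if wave == "square":
        return 1.0 if phase >= 0.0 else -1.0
    return phase
```

Stepping *k*_{t} up over time reproduces the situation in Figures 12–17, where the controller must re-adapt each time the reference frequency jumps.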

By using the square signal with varying frequency in Equation (35) as the desired trajectory, the reference signal and the system outputs for the time-varying plant are shown in Figure 12. The control signals and tracking errors for this case are shown in Figures 13, 14, respectively. Figure 15 shows the reference signals and system outputs for the time-varying plant when the desired trajectory is the sinusoidal signal with varying frequency in Equation (36). The control signals are plotted in Figure 16, and the tracking errors are plotted in Figure 17. The external disturbances and the system uncertainties in this case are chosen as ε(*t*) = 0.8 sin(π*t*) and Δ*y*(*t*) = 0.3 *y*(*t*), respectively. Simulation results for the sinusoidal reference showed that the proposed controller controlled the system well at the beginning of the control process; however, as the frequency increases with time, and when the time-varying plant parameters change suddenly, the tracking error also rises because the controller needs time to adapt to these changes. As shown in Figures 7, 11, 14, 17, there were some rapid error variations at the instants when the reference signals or the time-varying plant parameters changed suddenly. However, our proposed controller adapted to these changes better, and its tracking error converged more quickly than those of the other control methods. A comparison of the RMSE for tracking the variable-frequency signals is shown in Table 1.

**Figure 12**. Comparison of system outputs between proposed HNNCMAC and other control methods for square signal reference with varying frequency.

**Figure 13**. Comparison of control signals between proposed HNNCMAC and other control methods for square signal reference with varying frequency.

**Figure 14**. Comparison of tracking errors between proposed HNNCMAC and other control methods for square signal reference with varying frequency.

**Figure 15**. Comparison of system outputs between proposed HNNCMAC and other control methods for sinusoidal signal reference with varying frequency.

**Figure 16**. Comparison of control signals between proposed HNNCMAC and other control methods for sinusoidal signal reference with varying frequency.

**Figure 17**. Comparison of tracking errors between proposed HNNCMAC and other control methods for sinusoidal signal reference with varying frequency.

## Discussion

For these control problems, the HNNCMAC structure had three neurons in the input space, 10 neurons in the hidden space, and two neurons in the output NN space. The association space had two layers, each with five Gaussian membership functions. The inputs for the HNNCMAC control system were the output of the sliding hyperplane, its one-step delay, and its derivative: *s*(*t*), *s*(*t* − 1), and ṡ(*t*). Term *s*(*t* − 1) is used to obtain more information about the time-varying plants. The initial parameters for the Gaussian functions were *m*_{11k} = *m*_{21k} = *m*_{31k} = [−0.5 − 0.3 0 0.3 0.5], *m*_{12k} = *m*_{22k} = *m*_{32k} = [−0.45 − 0.35 − 0.5 0.25 0.45], and σ_{ijk} = 0.4. The parameters for the sliding surface were *n* = 3 and λ = 0.2. All learning rates were 0.01, and the sampling time was 0.01 s. Using the adaptation laws in Equations (10–23), the controller parameters can be updated online to adapt to changes in the control system. The examples demonstrated that our proposed controller handles external disturbances and system uncertainties well. The convergence of our proposed controller is guaranteed by the Lyapunov stability analysis in Equation (29). The average RMSE over all examples for the proposed HNNCMAC, the multilayer perceptron NN (MPNN), the conventional CMAC, the interval type-2 Petri CMAC (IT2PCMAC) (Le et al., 2019), and the type-2 Takagi–Sugeno–Kang fuzzy neural system (T2TSKFNS) (Abiyev and Kaynak, 2010) is shown in Table 1. The proposed controller uses the NN to reduce the inputs to the CMAC, so the structure of the modified CMAC in our proposed network is smaller than that of a conventional CMAC; this is more effective when the number of inputs is large. Table 1 also shows that the proposed controller has a smaller computation time than a conventional CMAC because the NN pre-controller reduces the computational complexity of the CMAC.
Moreover, the NN output uses the tangent sigmoid function to limit the output to [−1, 1]. Therefore, it is easy to design the network size and the initial membership functions in our modified CMAC controller. As shown in Table 1, the proposed HNNCMAC achieved better control performance, with the smallest RMSE among the compared controllers. In Appendix A, Tables A–D show an analysis of the differences between our proposed controller and the other controllers using the *t*-test statistical approach. In all examples, the statistical results showed that the *P*-value was lower than the alpha level (α = 0.05). Thus, we can conclude that the RMSE results of our proposed controller differed from those of the other controllers with statistical significance, which illustrates the superiority of the proposed controller. Real-world applications with large input spaces, such as medical-diagnosis, classification, and image-processing problems, could apply the proposed network to reduce the network structure. The choice of sliding-surface parameters strongly affects control performance; this study used a trial-and-error approach to obtain suitable parameters. Further studies should investigate estimation methods for these parameters to achieve better control performance.
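The RMSE comparison in Appendix A can be reproduced with a two-sample *t*-test. The paper does not state which variant it uses, so the sketch below implements Welch's test (unequal variances) with only the standard library; the *P*-value would then be read from the *t* distribution with the returned degrees of freedom:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and degrees of freedom.

    sample_a, sample_b -- per-run RMSE values of two controllers.
    Returns (t, df); |t| large relative to the t distribution with
    df degrees of freedom indicates a statistically significant
    difference in mean RMSE.
    """
    na, nb = len(sample_a), len(sample_b)
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se2 = va / na + vb / nb          # squared standard error of the difference
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Identical samples give t = 0, as expected, since the mean difference vanishes.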

## Conclusions

This paper proposed an HNNCMAC to control non-linear dynamic time-varying plants. The main contributions of this study are as follows: it demonstrated a method to control a non-linear dynamic time-varying plant; the HNNCMAC structure uses adaptive laws to adjust parameters online; the input range and quantity in the proposed CMAC can be reduced by the NN pre-controller, making it easy to design the network size and the initial membership functions; the stability of the proposed method is guaranteed by Lyapunov analysis; and the numerical-simulation results for controlling time-varying plants show the superiority of the proposed method over existing methods. Moreover, our proposed controller is simple to design and implement, and it can be applied to other fields such as system identification, classification, and prediction. Our future work will apply optimization algorithms to tune the sliding-surface parameters and the learning rates in the adaptive laws to achieve better control performance.

## Data Availability Statement

All datasets generated/analyzed for this study are included in the article/Supplementary Material.

## Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

## Funding

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2019-2018-0-01423) supervised by the IITP (Institute for Information & communications Technology Promotion).

## Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnins.2020.00695/full#supplementary-material

## References

Abiyev, R. H., and Kaynak, O. (2010). Type 2 fuzzy neural structure for identification and control of time-varying plants. *IEEE Trans. Ind. Electron.* 57, 4147–4159. doi: 10.1109/TIE.2010.2043036

Albus, J. S. (1975). A new approach to manipulator control: the cerebellar model articulation controller (CMAC). *J. Dyn. Syst. Meas. Control* 97, 220–227. doi: 10.1115/1.3426922

Guan, J. S., Hong, S. J., Kang, S. B., Zeng, Y., Sun, Y., and Lin, C. M. (2019). Robust adaptive recurrent cerebellar model neural network for non-linear system based on GPSO. *Front. Neurosci.* 13:390. doi: 10.3389/fnins.2019.00390

Han, Y. Q. (2018). Adaptive tracking control of nonlinear systems with dynamic uncertainties using neural network. *Int. J. Syst. Sci.* 49, 1391–1402. doi: 10.1080/00207721.2018.1453955

Le, T. L., Lin, C. M., and Huynh, T. T. (2019). "Interval type-2 Petri CMAC design for 4D chaotic system," in *2019 International Conference on System Science and Engineering (ICSSE)* (Dong Hoi: IEEE), 420–424. doi: 10.1109/ICSSE.2019.8823251

Li, D. P., Liu, Y. J., Tong, S., Chen, C. P., and Li, D. J. (2018). Neural networks-based adaptive control for nonlinear state constrained systems with input delay. *IEEE Trans. Cybernetics* 49, 1249–1258. doi: 10.1109/TCYB.2018.2799683

Li, T. Y., Tsai, J. Z., Chang, R. S., Ho, L. W., and Yang, C. F. (2013). Pretest gap mura on TFT LCDs using the optical interference pattern sensing method and neural network classification. *IEEE Trans. Ind. Electron.* 60, 3976–3982. doi: 10.1109/TIE.2012.2207658

Lin, C. M., and Chen, T. Y. (2009). Self-organizing CMAC control for a class of MIMO uncertain nonlinear systems. *IEEE Trans. Neural Netw.* 20, 1377–1384. doi: 10.1109/TNN.2009.2013852

Lin, C. M., Huynh, T. T., and Le, T. L. (2018a). Adaptive TOPSIS fuzzy CMAC back-stepping control system design for nonlinear systems. *Soft Comput.* 23, 6947–6966. doi: 10.1007/s00500-018-3333-4

Lin, C. M., La, V. H., and Le, T. L. (2018b). DC–DC converters design using a type-2 wavelet fuzzy cerebellar model articulation controller. *Neural Comput. Appl.* 32, 2217–2229. doi: 10.1007/s00521-018-3755-z

Lin, C. M., and Le, T. L. (2017a). PSO-self-organizing interval type-2 fuzzy neural network for antilock braking systems. *Int. J. Fuzzy Syst.* 19, 1362–1374. doi: 10.1007/s40815-017-0301-6

Lin, C. M., and Le, T. L. (2017b). WCMAC-based control system design for nonlinear systems using PSO. *J. Intell. Fuzzy Syst.* 33, 807–818. doi: 10.3233/JIFS-161999

Lin, C. M., Lin, M. H., and Yeh, R. G. (2013). Synchronization of unified chaotic system via adaptive wavelet cerebellar model articulation controller. *Neural Comput. Appl.* 23, 965–973. doi: 10.1007/s00521-012-1021-3

Lin, C. M., Yang, M. S., Chao, F., Hu, X. M., and Zhang, J. (2016). Adaptive filter design using type-2 fuzzy cerebellar model articulation controller. *IEEE Trans. Neural Netw. Learn. Syst.* 27, 2084–2094. doi: 10.1109/TNNLS.2015.2491305

Liu, K., Sun, D., Yao, Y., and Balakrishnan, V. (2011). “A new approach to stabilization of uncertain nonlinear systems”, in *2011 IEEE International Symposium on Computer-Aided Control System Design (CACSD)* (Denver, CO: IEEE), 228–233. doi: 10.1109/CACSD.2011.6044540

Manceur, M., Essounbouli, N., and Hamzaoui, A. (2012). Second-order sliding fuzzy interval type-2 control for an uncertain system with real application. *IEEE Trans. Fuzzy Syst.* 20, 262–275. doi: 10.1109/TFUZZ.2011.2172948

Narendra, K. S., and Parthasarathy, K. (1990). Identification and control of dynamical systems using neural networks. *IEEE Trans. Neural Netw.* 1, 4–27. doi: 10.1109/72.80202

Prieto, A., Prieto, B., Ortigosa, E. M., Ros, E., Pelayo, F., Ortega, J., et al. (2016). Neural networks: an overview of early research, current frameworks and new challenges. *Neurocomputing* 214, 242–268. doi: 10.1016/j.neucom.2016.06.014

Sun, T., and Pan, Y. (2017). Adaptive control for nonaffine nonlinear systems using reliable neural network approximation. *IEEE Access* 5, 23657–23662. doi: 10.1109/ACCESS.2017.2763628

Tsao, Y., Chu, H. C., Fang, S. H., Lee, J., and Lin, C. M. (2018). Adaptive noise cancellation using deep cerebellar model articulation controller. *IEEE Access* 6, 37395–37402. doi: 10.1109/ACCESS.2018.2827699

Tu, J. V. (1996). Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. *J. Clin. Epidemiol.* 49, 1225–1231. doi: 10.1016/S0895-4356(96)00002-9

Wang, J., Hu, S., Zhan, X., Luo, Q., Yu, Q., Liu, Z., et al. (2018). Predicting house price with a memristor-based artificial neural network. *IEEE Access* 6, 16523–16528. doi: 10.1109/ACCESS.2018.2814065

Wang, S., Chen, S., Ji, W., and Liu, K. (2019). Quantized sampled-data control for TS fuzzy system using discontinuous LKF approach. *Front. Neurosci.* 13:372. doi: 10.3389/fnins.2019.00372

Zhang, C. J., Shao, C., and Chai, T. Y. (1998). Indirect adaptive control for a class of linear time-varying plants. *IEE Proc. Control Theory Appl.* 145, 141–149. doi: 10.1049/ip-cta:19981847

Zhao, J., Lin, C. M., and Chao, F. (2018). Wavelet fuzzy brain emotional learning control system design for MIMO uncertain nonlinear systems. *Front. Neurosci.* 12:918. doi: 10.3389/fnins.2018.00918

Zheng, E. H., Xiong, J. J., and Luo, J. L. (2014). Second order sliding mode control for a quadrotor UAV. *ISA Transactions* 53, 1350–1356. doi: 10.1016/j.isatra.2014.03.010

Zhou, M., and Zhang, Q. (2015). Hysteresis model of magnetically controlled shape memory alloy based on a PID neural network. *IEEE Trans. Mag.* 51, 7301504–7301507. doi: 10.1109/TMAG.2015.2434933

Keywords: neural network, cerebellar model articulation controller, time-varying plants, non-linear system, adaptive control

Citation: Le T-L, Huynh T-T, Hong S-K and Lin C-M (2020) Hybrid Neural Network Cerebellar Model Articulation Controller Design for Non-linear Dynamic Time-Varying Plants. *Front. Neurosci.* 14:695. doi: 10.3389/fnins.2020.00695

Received: 27 January 2020; Accepted: 09 June 2020;

Published: 28 July 2020.

Edited by:

Vitor Engracia Valenti, São Paulo State University, Brazil

Reviewed by:

Grienggrai Rajchakit, Maejo University, Thailand
Ming-Feng Yeh, Lunghwa University of Science and Technology, Taiwan

Copyright © 2020 Le, Huynh, Hong and Lin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sung-Kyung Hong, skhong@sejong.ac.kr