A Reduced-Order RNN Model for Solving Lyapunov Equation Based on Efficient Vectorization Method

With the ongoing electronization of the power system, traditional serial numerical algorithms find it increasingly difficult to meet the demand for real-time power system analysis. As one of the important computing tasks in power systems, the online solution of Lyapunov equations has attracted much attention. The recurrent neural network (RNN) is a promising online solver for the Lyapunov equation because of its hardware implementation capability and parallel distributed characteristics. To improve the performance of the traditional RNN, in this study we design an efficient vectorization method and propose a reduced-order RNN model to replace the original one. First, a new vectorization method is proposed based on the special structure of the vectorized matrix, which is more efficient than the traditional Kronecker product method. Second, to counter the expanding effect of vectorization on the problem scale, a reduced-order RNN model based on symmetry is proposed to reduce the solution scale of the RNN. With regard to accuracy and robustness, it is proved theoretically that the proposed model maintains the same solution as the original model and that it is applicable to both the Zhang neural network (ZNN) model and the gradient neural network (GNN) model under linear or non-linear activation functions. Finally, the effectiveness and superiority of the proposed method are verified by simulation examples, three of which are standard power system examples.


INTRODUCTION
With the trend toward electronization of the power system, the scale of system computation is increasing day by day, while the demand for real-time analysis and calculation during system operation remains. Traditional serial algorithms cannot resolve this contradiction well, so various parallel algorithms and distributed methods have appeared successively. In power system state estimation, the SuperLU_MT solver has been used to estimate the state of an actual power grid, making full use of the parallelism of the multicore, multi-thread solver. Liu Z. et al. (2020) fully explored the parallelism in continuation power flow calculation and applied the continuous Newton method power flow model to realize a GPU-based parallel solution algorithm for continuation power flow at large scale and under multiple working conditions. Moreover, a novel distributed dynamic event-triggered Newton-Raphson algorithm has been proposed to solve the double-mode energy management problem in a fully distributed fashion (Li et al., 2020). Similarly, Li Y. et al. (2019) proposed an event-triggered distributed algorithm with several desirable features, namely, distributed execution, asynchronous communication, and independent calculation, which can solve the day-ahead and real-time cooperative energy management problems of multienergy systems. Given that software algorithms are ultimately run by hardware, implementing functions directly in hardware is also an option for real-time computing. For example, Hafiz et al. (2020) proposed real-time stochastic optimization of energy storage management using deep learning-based forecasts for residential PV applications, where the key to real-time computation is the hardware controller.
It is worth pointing out that compared with the aforementioned methods, the neural dynamics method has greater potential in the field of real-time calculation of power systems (Le et al., 2019), and its time constant can reach tens of milliseconds (Chicca et al., 2014) because of its parallel distribution characteristics and the convenience of hardware implementation.
The Lyapunov equation is widely used in scientific and engineering fields to analyze the stability of dynamic systems (He and Zhang, 2017; Liu J. et al., 2020). In addition, the Lyapunov equation plays an important role in controller design and robustness analysis of non-linear systems (Zhou et al., 2009; Raković and Lazar, 2014). In the field of power systems, the balanced truncation method, controller design, and stability analysis are also inseparable from the solution of the Lyapunov equation (Zhao et al., 2014; Zhu et al., 2016; Shanmugam and Joo, 2021). Therefore, many algorithms have been proposed to solve the Lyapunov equation. For example, Bartels and Stewart proposed the numerically stable Bartels-Stewart method (Bartels and Stewart, 1972). Lin and Simoncini (2013) proposed the minimum residual method for solving the Lyapunov equation. Stykel (2008) used a low-rank iterative method to solve the Lyapunov equation and verified its effectiveness through numerical examples. However, the efficiency of these serial algorithms is not high in large-scale applications and related real-time processing (Xiao and Liao, 2016).
Recently, owing to their parallelism and convenience of hardware implementation, recurrent neural networks (RNNs) have been proposed and designed to solve the Lyapunov equation (Zhang et al., 2008; Yi et al., 2011; Yi et al., 2013; Xiao et al., 2019). The RNN mainly includes the Zhang neural network (ZNN) and the gradient neural network (GNN) (Zhang et al., 2008). Most research on the RNN focuses on improving model convergence. For example, Yi et al. (2013) pointed out that, when solving a stationary or a non-stationary Lyapunov equation, the convergence of the ZNN is better than that of the GNN. Yi et al. (2011) used a power-sigmoid activation function (PSAF) to build an improved GNN model that accelerates convergence for the Lyapunov equation. In Xiao and Liao (2016), the sign-bi-power activation function (SBPAF) is used to accelerate the convergence of the ZNN model for solving the Lyapunov equation, and the proposed ZNN model has finite-time convergence, which is clearly better than previous ZNN and GNN models. In recent years, some studies have considered noise-tolerant ZNN models. In Xiao et al. (2019), two robust non-linear ZNN (RNZNN) models are established to find the solution of the Lyapunov equation under various noise conditions. Different from previous ZNN models activated by typical activation functions (such as the linear activation function, the bipolar sigmoid activation function, and the power activation function), these two RNZNN models have predefined-time convergence in the presence of various noises.
However, both the GNN and the ZNN need to transform the solution matrix from matrix form to vector form through the Kronecker product, which is called vectorization of the RNN model (Yi et al., 2011). The Kronecker product enlarges the scale of the problem to be solved, and this enlargement effect becomes more obvious as the problem size increases. It will not only lead to insufficient memory when the RNN is simulated in software but also make the hardware implementation of the RNN model require more devices and wiring, which increases the hardware volume, the complexity of hardware production, and the failure rate of hardware. Nevertheless, no study to date has discussed order reduction of the RNN model.
It should be pointed out that the vectorized RNN model needs to be solved using a hardware circuit. However, as the relevant research of the RNN for solving the Lyapunov equation is still in the stage of theoretical exploration and improvement, there are no reports about hardware products of the RNN solver of the Lyapunov equation. Relevant studies (Zhang et al., 2008;Yi et al., 2011;Yi et al., 2013;Xiao and Liao, 2016;Xiao et al., 2019) simulate the execution process of the RNN hardware circuit through the form of software simulation, and this study also adopts this form. It is undeniable that the results of software simulation are consistent with those of hardware implementation. Therefore, the theoretical derivation and simulation results of the RNN in this article and in the literature (Zhang et al., 2008;Yi et al., 2011;Yi et al., 2013;Xiao and Liao, 2016;Xiao et al., 2019) can be extended to the scenarios of hardware implementation.
The ultimate goal of using the RNN to solve the Lyapunov equation is to develop an effective online computation model, so improving the computation speed of the RNN is of great significance. Current studies focus on improving the computational speed of the RNN by improving its convergence. However, efficiently realizing the vectorization of the RNN model is another way to improve the computational efficiency of the RNN method. At present, the Kronecker product is generally used to transform the solution matrix into vector form (Horn and Johnson, 1991). The Kronecker product actually performs multiple matrix multiplications, and the time complexity of multiplying two n×n matrices is O(n³), so the time complexity of the Kronecker product increases rapidly with scale. This means that the traditional matrix vectorization method based on the Kronecker product still has room for optimization.
In summary, this article proposes an efficient method for vectorizing the RNN model based on the special structure of the vectorized matrix, which is more efficient than the traditional expansion by the Kronecker product. To counter the expanding effect of vectorization on the problem scale, a reduced-order RNN model based on symmetry is proposed for solving the time-invariant Lyapunov equation, and its validity and applicability are proved theoretically. To show the contributions of this study more clearly, the logical graph of using the RNN model to solve the Lyapunov equation is shown in Figure 1, and the contributions are summarized in Table 1.
In Table 1, the items and numbers correspond to the steps of Figure 1. In conclusion, Refs. (Yi et al., 2011; Yi et al., 2013; Xiao and Liao, 2016; Xiao et al., 2019) focus on constructing a stronger RNN model to improve convergence and noise tolerance, including using different activation functions and neural networks, whereas this study focuses on the vectorization method and the reduced-order RNN model.

PROBLEM FORMULATION AND RELATED WORK

Problem Formulation
Consider the following well-known Lyapunov equation (Yunong Zhang and Danchi Jiang, 1995):

A^T X(t) + X(t)A + C = 0, (1)

where A ∈ R^{n×n} is a constant stable real matrix and C ∈ R^{n×n} is a constant symmetric positive-definite matrix. The objective is to find the unknown matrix X(t) ∈ R^{n×n} that makes the Lyapunov matrix Eq. 1 hold. Let X* ∈ R^{n×n} denote the theoretical solution of Eq. 1.
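As a quick sanity check, the unique solution of Eq. 1 for a given stable A and symmetric positive-definite C can be obtained with an off-the-shelf solver. A minimal Python sketch follows; the test matrices are illustrative choices, not taken from the article's examples:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative data: A stable (eigenvalues in the left half-plane), C SPD
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
C = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a.T = q,
# so Eq. 1, A^T X + X A = -C, corresponds to a = A^T and q = -C
X = solve_continuous_lyapunov(A.T, -C)

residual = A.T @ X + X @ A + C
assert np.allclose(residual, 0)   # X satisfies Eq. 1
assert np.allclose(X, X.T)        # the solution is symmetric
```

This reference solution is a convenient baseline against which the RNN trajectories discussed below can be checked.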
In addition, two of the most relevant works (i.e., GNN and ZNN models) are presented to solve the Lyapunov Eq. 1 in the following.

GNN
According to the principle of the GNN (Yi et al., 2011), combined with the characteristics of the Lyapunov equation, a corresponding GNN model can be designed to solve it. The design steps are as follows. First, construct a norm-based energy function:

ε(t) = ‖A^T X(t) + X(t)A + C‖²_F / 2, (2)

where ‖·‖_F denotes the Frobenius norm (F-norm). The minimum of the energy function corresponds to the solution of the Lyapunov equation. Second, based on the negative-gradient-descent principle of the GNN, the following formula can be constructed:

dX(t)/dt = −∂ε(t)/∂X(t). (3)

By introducing an adjustable positive parameter γ, the following GNN model can be obtained:

dX(t)/dt = −γ[A(A^T X(t) + X(t)A + C) + (A^T X(t) + X(t)A + C)A^T], (4)

where γ > 0, X(t) ∈ R^{n×n}, and X(0) ∈ R^{n×n} is the initial value of X(t).
Finally, the conventional linear GNN (Eq. 4) can be improved into the following non-linear expression by employing a non-linear activation function array F(·):

dX(t)/dt = −γ[A F(A^T X(t) + X(t)A + C) + F(A^T X(t) + X(t)A + C)A^T], (5)

where F(·): R^{n×n} → R^{n×n} denotes a matrix-valued activation function array of the GNN model. In this study, the bipolar sigmoid activation function (BPAF) is selected as the representative non-linear activation function of the GNN model for simulation because of its strong convergence (Yi et al., 2011). The BPAF is defined element-wise as

f(x) = (1 − exp(−δx)) / (1 + exp(−δx)), (6)

where δ is a constant and δ > 1.
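The linear GNN dynamics (Eq. 4) can be simulated with a simple forward-Euler loop. The sketch below is a minimal illustration; the step size, horizon, and test matrices are our own choices, not the article's:

```python
import numpy as np

def gnn_lyapunov(A, C, gamma=1.0, dt=0.01, steps=5000):
    """Forward-Euler simulation of the linear GNN
    dX/dt = -gamma * (A E + E A^T), where E = A^T X + X A + C."""
    n = A.shape[0]
    X = np.zeros((n, n))  # initial value X(0)
    for _ in range(steps):
        E = A.T @ X + X @ A + C
        X = X - dt * gamma * (A @ E + E @ A.T)
    return X

A = np.array([[-1.0, 0.2, 0.0],
              [0.0, -1.5, 0.1],
              [0.0, 0.0, -2.0]])   # stable: eigenvalues -1, -1.5, -2
C = np.eye(3)
X = gnn_lyapunov(A, C)
print(np.linalg.norm(A.T @ X + X @ A + C))  # residual F-norm, near 0
```

Hardware or ode45-based simulations integrate the same dynamics continuously; forward Euler with a small step is only the simplest discrete stand-in.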

ZNN
First, following Zhang et al.'s design method (Zhang et al., 2002), we define the following matrix-valued error function to monitor the solution process of Lyapunov Eq. 1:

E(t) = A^T X(t) + X(t)A + C. (7)

Then, in view of the definition of E(t) and the design formula dE(t)/dt = −γφ(E(t)), the dynamic equation of the ZNN model for solving the online Lyapunov Eq. 1 is derived as

A^T (dX(t)/dt) + (dX(t)/dt)A = −γφ(A^T X(t) + X(t)A + C), (8)

where φ(·): R^{n×n} → R^{n×n} denotes a matrix-valued activation function array of the ZNN model. The definition of γ in the ZNN model is the same as that in the GNN model. In this study, the RNZNN-1 model is selected as the representative non-linear ZNN model for simulation because of its strong convergence (Xiao et al., 2019). The non-linear activation function of the RNZNN-1 model is defined element-wise as

φ(x) = (a₁|x|^η + a₂|x|^ω)sign(x) + a₃x + a₄sign(x), (9)

where the design parameters satisfy 0 < η < 1, ω > 1, a₁ > 0, a₂ > 0, a₃ ≥ 0, a₄ ≥ 0, and sign(x) denotes the signum function.

General Method of Vectorizing RNN Model
The RNN model needs to be transformed to the vector form so that it can be used for software simulation (Li X. et al., 2019) and hardware implementation.

Vectorization of the GNN Model
Yi et al. (2011) pointed out that the GNN model can be vectorized as follows:

vec(dX(t)/dt) = −γ vec(A F(A^T X + XA + C) + F(A^T X + XA + C)A^T), (10)

where ⊗ denotes the Kronecker product. Given X = [x_ij] ∈ R^{n×n}, X is vectorized as the column vector vec(X) ∈ R^{n²×1}, defined as vec(X) = [x_11, ..., x_1n, x_21, ..., x_2n, ..., x_n1, ..., x_nn]^T. Under this vectorization,

vec(A^T X + XA) = (A^T ⊗ I + I ⊗ A^T)vec(X) = (A^T ⊕ A^T)vec(X), (12)

where ⊕ denotes the Kronecker sum. Since the order of matrix addition and matrix transposition is interchangeable (Cheng and Chen, 2017), and, according to Chen and Zhou (2012), the transpose and the Kronecker product satisfy (B ⊗ D)^T = B^T ⊗ D^T, applying these properties together with I = I^T to Eqs 10 and 12 yields the vectorized GNN model:

vec(dX(t)/dt) = −γ(A ⊕ A) F((A^T ⊕ A^T)vec(X(t)) + vec(C)). (17)

Vectorization of the ZNN Model
The vectorization process of the ZNN model is similar to that of the GNN. Applying the Kronecker product to Eq. 8 yields

(A^T ⊕ A^T)vec(dX(t)/dt) = −γφ((A^T ⊕ A^T)vec(X(t)) + vec(C)). (18)
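Under a linear activation function, the vectorized ZNN above reduces to K vec(dX/dt) = −γ(K vec(X) + vec(C)) with K = A^T ⊕ A^T; whenever K is invertible, this is vec(dX/dt) = −γ(vec(X) + K⁻¹vec(C)). A minimal forward-Euler sketch (test matrices and step size are illustrative, not from the article):

```python
import numpy as np

A = np.array([[-1.0, 0.3], [0.0, -2.0]])   # stable
C = np.eye(2)
n = 2

I = np.eye(n)
K = np.kron(A.T, I) + np.kron(I, A.T)      # Kronecker sum A^T (+) A^T
c = C.flatten()                            # vec(C), row-major

gamma, dt = 1.0, 0.01
x = np.zeros(n * n)                        # vec(X(0))
Kinv_c = np.linalg.solve(K, c)
for _ in range(3000):
    x = x - dt * gamma * (x + Kinv_c)      # linear ZNN dynamics

print(np.linalg.norm(K @ x + c))           # residual decays toward 0
```

The state converges exponentially to vec(X*) = −K⁻¹vec(C), consistent with the linear ZNN theory cited above.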

Vectorization of the RNN Model
By comparing Eqs 10, 17 and 18, it can be seen that the key to vectorizing the RNN model is computing A^T ⊕ A^T. According to Eq. 12, the calculation of A^T ⊕ A^T can be divided into three steps:

1) Compute A^T ⊗ I, where I is the n×n identity matrix; A^T ⊗ I is an n²×n² matrix whose (i, j) block is the diagonal matrix a_ji I.
2) Compute I ⊗ A^T, an n²×n² block-diagonal matrix whose diagonal blocks are all A^T.
3) Add the two matrices.
FIGURE 2 | Circuit schematic which realizes the GNN.

According to the previous analysis, no matter how the matrix A changes, the structure of A^T ⊕ A^T is fixed. Based on this special structure, an efficient method for vectorizing the RNN model is proposed in this article. The steps are as follows.
1) Create an n²×n² matrix named K and fill it with the elements of A according to Eq. 19.
2) Fill K with the elements of A according to Eq. 20.
3) Add the corresponding elements of A to the diagonal elements of K according to Eq. 21.
The vectorization method proposed in this article is still based on the Kronecker product, but its time complexity is greatly reduced, because it replaces matrix multiplications with simple assignments and additions.
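As an illustration of this idea, the sketch below builds K = A^T ⊕ A^T by direct assignment and addition instead of calling a Kronecker-product routine; the function name and loop layout are ours, not the article's:

```python
import numpy as np

def build_K(A):
    """Construct K = A^T kron I + I kron A^T by assignment/addition only."""
    n = A.shape[0]
    At = A.T
    K = np.zeros((n * n, n * n))
    idx = np.arange(n)
    for i in range(n):
        for j in range(n):
            # block (i, j) of A^T kron I is At[i, j] * I: fill its diagonal
            K[i * n + idx, j * n + idx] += At[i, j]
    for b in range(n):
        # block b of I kron A^T is a copy of A^T on the block diagonal
        K[b * n:(b + 1) * n, b * n:(b + 1) * n] += At
    return K

A = np.array([[1.0, 2.0], [3.0, 4.0]])
I = np.eye(2)
assert np.allclose(build_K(A), np.kron(A.T, I) + np.kron(I, A.T))
```

The two loops perform on the order of n³ scalar writes in total, whereas a dense Kronecker product touches all n⁴ entries of each factor product, matching the article's claim of reduced time complexity.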

THE REDUCED-ORDER RNN MODEL FOR SOLVING LYAPUNOV EQUATIONS BASED ON SYMMETRY
Since the solution of Eq. 1, X*, is always symmetric, only the upper triangular elements of X* need to be solved; the remaining elements follow by symmetry. Consider the vectorized RNN model with linear activation,

K vec(dX(t)/dt) = −γ(K vec(X(t)) + vec(C)), (22)

where K = A^T ⊕ A^T. For the convenience of later discussion, a matrix S ∈ R^{n×n} is constructed, where each element of S is the index number of the element of A at the same position. We can expand X* to vec(X*) ∈ R^{n²×1}. Suppose x*_2 and x*_{n+1} are, respectively, the elements of the 2nd row and the (n+1)th row of vec(X*). Due to symmetry, x*_2 equals x*_{n+1}.
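The index matrix S can be illustrated concretely. The snippet below assumes row-major numbering starting from 1, consistent with the vec(·) ordering defined earlier:

```python
import numpy as np

n = 3
# S[i, j] is the (1-based, row-major) index of position (i, j)
S = np.arange(1, n * n + 1).reshape(n, n)
# symmetric positions of S form the index pairs used for order reduction
pairs = [(int(S[i, j]), int(S[j, i]))
         for i in range(n) for j in range(i + 1, n)]
print(pairs)  # [(2, 4), (3, 7), (6, 8)]
```

For n = 3, the pairs (2, 4), (3, 7), and (6, 8) identify the positions of vec(X*) that must be equal by symmetry.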

Reduce the Column Number of K
If the Kronecker product is applied directly to Eq. 1, we obtain

K vec(X) = −vec(C). (24)

Lan (2017) points out that if A is stable and C is symmetric positive definite, then the Lyapunov Eq. 1 has a unique symmetric positive-definite solution. Therefore, the matrix K in Eq. 24 must be invertible.
Multiplying both sides of Eq. 24 by the inverse of K gives

vec(X*) = −K⁻¹vec(C). (25)

From Eq. 25, we can see that −K⁻¹vec(C) is the solution of the Lyapunov Eq. 1. Since X* is a symmetric matrix, the differential equations of x_2(t) and x_{n+1}(t) in Eq. 22 are the same. If x_2(0) and x_{n+1}(0) are equal, then the time-domain trajectories of x_2(t) and x_{n+1}(t) coincide, namely, ẋ_2(t) = ẋ_{n+1}(t) and x_2(t) = x_{n+1}(t). Therefore, in Eq. 22, column n + 1 of K can be added to the second column. Similarly, the same column addition can be performed on the other pairs of symmetric positions, so the column number of K reduces to 0.5(n + 1)n, and Eq. 22 becomes Eq. 26.

Reduce the Row Number of K

When the steady state is considered, the differential term of Eq. 26 is 0, and we obtain Eq. 27. As mentioned before, if A is stable and C is symmetric positive definite, then Lyapunov Eq. 1 has a unique symmetric positive-definite solution. Therefore, the number of independent equations equals the number of variables (Cheng and Chen, 2017); that is, the rank of the coefficient matrix of Eq. 27 equals 0.5(n + 1)n, which means the row vectors of the coefficient matrix of Eq. 27 are linearly dependent. We construct the augmented matrix of Eq. 27 and name it G. Define the first row of G as the vector α_1, the second row as the vector α_2, and so on, up to row n², defined as the vector α_{n²}. According to linear algebra, the vector set {α_1, α_2, ..., α_{n²}} is linearly dependent only if at least one vector in the set can be represented linearly by the other vectors.
Let us define the vectors that can be represented linearly by the other vectors as redundant vectors. Suppose that

α_{n²} = h_1 α_1 + h_2 α_2 + ... + h_{n²−1} α_{n²−1}, (28)

where h_1, h_2, ..., h_{n²−1} are real numbers and at least one of them is non-zero. Then α_{n²} is a redundant vector. As long as the redundant vectors are found and the equations of their corresponding rows are deleted, a new augmented matrix with full row rank can be obtained. According to the aforementioned analysis, as long as A is stable and C is symmetric positive definite, redundant vectors must exist, which means there are always some vectors satisfying Eq. 28. However, A and C are independent of each other. Considering that some vectors satisfy Eq. 28 for any stable A and any symmetric positive-definite C, the only possibility is that, for each redundant vector, there is another vector equal to it whose row index is symmetric to it in the matrix S. Only in this way, based on the symmetry of matrix C, can the redundant vectors always satisfy Eq. 28 when A and C are independent of each other.
For a redundant vector, its own row index and the row index of another vector equal to it can form a pair of indexes. We can use these index pairs to find the redundant vectors and delete the corresponding rows. In general, in an index pair, the equation corresponding to the index whose value is larger is selected for deletion.
Eq. 29 can be written as K_r vec(dX_r/dt) = −γ(K_r vec(X_r) + vec(C_r)).

Reduced-Order GNN Model With Linear Activation Function
Consider the GNN model with linear activation function after vectorization:

vec(dX(t)/dt) = −γK^T(K vec(X(t)) + vec(C)). (30)

When the steady state is considered, the differential term of Eq. 30 is 0, and Eq. 30 reduces to Eq. 24. From the above derivation, both K vec(X) = −vec(C) and K_r vec(X_r) = −vec(C_r) yield the solution of Lyapunov Eq. 1. On this basis, a reduced-order GNN model with linear activation function after vectorization is constructed:

vec(dX_r(t)/dt) = −γK_r^T(K_r vec(X_r(t)) + vec(C_r)). (31)
When the steady state is considered, Eq. 31 reduces to K_r vec(X_r) = −vec(C_r), which means the solution of Lyapunov Eq. 1 can be obtained by solving Eq. 31.

Reduced-Order RNN Model With Non-Linear Activation Functions
Before any non-linear activation function is introduced into a linear RNN, it must be proved theoretically that its introduction guarantees correct convergence of the RNN. The reduced-order RNN in this article does not change the structure of the RNN, only its size. Because both the problem scale and the matrix entries are generally expressed in symbolic form in such theoretical derivations (Xiao and Liao, 2016; Xiao et al., 2019), the existing proofs for introducing non-linear activation functions into a linear RNN still apply to the reduced-order RNN model. In other words, the order-reduction method in this article can be applied to RNN models with non-linear activation functions.

The Generation of the Reduced-Order RNN Model
Frontiers in Energy Research | www.frontiersin.org | February 2022 | Volume 10 | Article 796325

The steps for generating the reduced-order RNN model are as follows:

1) S is constructed; the construction logic is as described above.
2) Considering that the indexes in the symmetric positions of S can form n(n−1)/2 index pairs, we construct a matrix L and fill it with these n(n−1)/2 index pairs. Note that all elements in the first column of L must be upper triangular elements (excluding diagonal elements) of S. For convenience of presentation, let M be the first column of L and N the second column of L.
3) For K and vec(C) of the RNN model, delete the rows corresponding to the element values of N.
4) For the K and vec(C) obtained in step 3, add the symmetric columns according to the element values of M and N; the columns obtained by the addition replace the columns corresponding to the element values of M, while the columns corresponding to the element values of N are deleted. For vec(dX(t)/dt) and vec(X(t)), the rows corresponding to the element values of N are deleted.
It should be pointed out that, in the mathematical proof, if the order of row reduction and column reduction were exchanged, the correctness of the reduced-order RNN model could not be proved in the same way, or another proof method would be needed. However, once the proof is complete, the order of row reduction and column reduction does not affect the final result, since we know in advance which rows and columns are to be deleted. In the aforementioned generation steps, step 3 is carried out first because it reduces the computational amount of the subsequent column additions, achieving higher efficiency.
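The generation steps above can be sketched in Python. The function below operates on the full K and vec(C) and assumes the row-major vec(·) ordering used earlier; the function and variable names are ours, not the article's:

```python
import numpy as np

def reduce_order(K, vecC):
    """Delete the rows indexed by N (step 3), then fold each column N
    into its symmetric column M and delete column N (step 4)."""
    n2 = K.shape[0]
    n = int(round(np.sqrt(n2)))
    # 0-based row-major indices of the symmetric position pairs (M, N)
    M = [i * n + j for i in range(n) for j in range(i + 1, n)]
    N = [j * n + i for i in range(n) for j in range(i + 1, n)]
    keep = [r for r in range(n2) if r not in set(N)]
    Kr = K[keep, :].copy()          # step 3: delete redundant rows
    vecCr = vecC[keep].copy()
    for m, nn in zip(M, N):         # step 4: add column N into column M
        Kr[:, m] += Kr[:, nn]
    Kr = Kr[:, keep]                # delete the folded columns
    return Kr, vecCr

# usage: solve the reduced steady-state system K_r x_r = -vec(C_r)
A = np.array([[-2.0, 0.5], [0.0, -1.0]])
C = np.eye(2)
I = np.eye(2)
K = np.kron(A.T, I) + np.kron(I, A.T)
Kr, cr = reduce_order(K, C.flatten())
xr = np.linalg.solve(Kr, -cr)       # upper triangular elements of X*
```

The reduced system has 0.5(n + 1)n unknowns; reconstructing the full symmetric X from xr recovers the solution of Eq. 1.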

The Significance of the Reduced-Order RNN Model
In order to better explain the value and significance of the reduced-order RNN model proposed in this article, the differences before and after the order reduction are shown from the perspectives of software simulation and hardware implementation, respectively. For the convenience of discussion, the GNN is taken as an example to illustrate.

Simulation on the Software
We use the ode45 function of MATLAB to solve the GNN model after vectorization. By comparing Eq. 30 and Eq. 31, it can be seen that the memory requirement of the reduced-order GNN model is much smaller than that of the original GNN model. Therefore, the reduced-order GNN model greatly alleviates the insufficient-memory problem that may occur in software simulation of the GNN.

Hardware Implementation
When we use the traditional GNN model, the structure of the circuit diagram is shown as Figure 2 (Yi et al., 2011).
When we use the reduced-order GNN model, the structure of the circuit diagram is the same as Figure 2, except that the n² in the diagram becomes 0.5(n + 1)n.
So the reduced-order GNN model greatly reduces the number of devices and the wiring required for hardware realization of the GNN model, which helps reduce the hardware volume, the complexity of hardware production, and the failure rate of hardware.

ILLUSTRATIVE VERIFICATION
The simulation examples in this article are all completed on the MATLAB 2013b platform. In this article, the ode45 function of MATLAB is used to simulate the iterative process of RNN (Zhang et al., 2008). The corresponding computing performance is tested on a personal computer with Intel Core i7-4790 CPU @3.2GHz and 8 GB RAM.
Since software and hardware differ greatly in how the integration function is realized, there is a large gap between the time cost of simulating the RNN process in software and that of implementing the RNN model in hardware. Considering that research on the RNN model for solving the Lyapunov equation is still at the stage of theoretical exploration and has not yet reached hardware production, this article does not discuss the influence of the proposed reduced-order RNN model on the computation time of the RNN.

The Reduced-Order RNN Model for Solving Lyapunov Equation Based on Symmetry
Example 1. Let us consider the Lyapunov Eq. 1 with coefficient matrices in which A is similar to Example II in Xiao et al. (2019). However, A and C of Example II in Xiao et al. (2019) do not fit the definition of Eq. 1, so we modify A slightly and set C to be the identity matrix.
To demonstrate the advantages of the reduced-order RNN model, this article compares the performance of the reduced-order RNN and the original-order RNN, as shown in Table 2. In Table 2, linearity and non-linearity mean the linear and non-linear activation functions, respectively. Scale means the row number of vec(X) or vec(X_r). Proportion means the row number of vec(X_r) divided by the row number of vec(X). The ZNN F-norm and GNN F-norm mean ‖A^T X + XA + C‖_F at the end of the ZNN and GNN simulations, respectively.
To study the effect of the proposed order-reduction method on the convergence of the RNN model, the F-norm curves of the original-order and reduced-order RNN models are drawn in Figure 3. In Figure 3A, LAF means the ZNN model with the linear activation function, and NAF means the ZNN model with the non-linear activation function of Eq. 9. In Figure 3B, LAF means the GNN model with the linear activation function, and BPAF means the GNN model with the non-linear activation function of Eq. 6. In both Figure 3A and Figure 3B, F-norm refers to ‖A^T X(t) + X(t)A + C‖_F.

Example 2. To enlarge the scale of the example, we consider Lyapunov Eq. 1 with coefficient matrices in which A is similar to Example III in Xiao et al. (2019). However, A and C of Example III in Xiao et al. (2019) do not fit the definition of Eq. 1, so we modify A slightly and set C to be the identity matrix.
In this example, RNN's model parameters are the same as Example 1. Similar to Example 1, we can get Table 3 and Figure 4. The definitions of all nouns in Table 3 are the same as those in Table 2, and the definitions of all nouns in Figure 4 are the same as those in Figure 3.

Example 3
A 10×10 matrix is randomly generated, and then an α-shift is applied to make it stable (Yang et al., 1993); this is the generation process of A in Example 3. C is set to be the identity matrix.
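The α-shift used here can be sketched as follows: subtracting αI with α greater than the largest real part of the eigenvalues moves the whole spectrum into the open left half-plane (the margin of 1.0 and the random seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((10, 10))      # random 10x10 matrix
# alpha-shift: eig(M - alpha*I) = eig(M) - alpha, so choosing alpha
# above the largest real part makes every eigenvalue strictly negative
alpha = max(np.real(np.linalg.eigvals(M))) + 1.0
A = M - alpha * np.eye(10)
assert max(np.real(np.linalg.eigvals(A))) < 0   # A is now stable
```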
In this example, RNN's model parameters are the same as Example 1. Similar to Example 1, we can get Table 4 and Figure 5. The definitions of all nouns in Table 4 are the same as those in Table 2, and the definitions of all nouns in Figure 5 are the same as those in Figure 3.
Based on the information in the aforementioned three tables, we can draw the following conclusions: a) The reduced-order RNN model has a very obvious effect, with the scale reduced by about 33-45%. Moreover, the effect becomes more obvious as the size of the example increases. From lim_{n→∞} 0.5n(n+1)/n² = 0.5, it can be seen that the larger the example, the closer the scale reduction is to 50%. The reduced-order RNN model not only greatly alleviates the insufficient-memory problem in software simulation of the RNN but also greatly reduces the number of devices and the wiring required for hardware realization of the RNN model, which helps reduce the hardware volume, the complexity of hardware production, and the failure rate of hardware. b) Under different case scales, whether for the ZNN or the GNN, and whether a linear or a non-linear activation function is used, the reduced-order RNN model reaches the same solution accuracy as the original-order model.

Example 4
To verify the applicability of the neural dynamics method to the power system, the Lyapunov equation describing system controllability is generated for the IEEE three-machine nine-node system according to the principle of the balanced truncation method (Zhao et al., 2014). The input signal is the rotor speed deviation, and the output signal is the auxiliary stabilizing signal (Zhu et al., 2016). The IEEE standard systems used in this article come from the examples of the PST toolkit (Lan, 2017), and the linearization of the system is realized by svm_mgen.m of the PST toolkit. We set γ = 10. A ∈ R^{15×15} and C ∈ R^{15×15} of the Lyapunov equation are detailed in the Supplementary Material. In this example, the linear ZNN is selected for testing. Similar to Example 1, we obtain Table 5 and Figure 6. The definitions of all terms in Table 5 are the same as those in Table 2, and the definitions of all terms in Figure 6 are the same as those in Figure 3.

Example 5
Similar to Example 4, we generate the Lyapunov equation describing system controllability of the IEEE 16-machine system. A ∈ R^{35×35} and C ∈ R^{35×35} of the Lyapunov equation are detailed in the Supplementary Material. We set γ = 100. The simulation results are shown in Table 5 and Figure 7. The definitions of all terms in Figure 7 are the same as those in Figure 3.

Example 6
Similar to Example 4, we generate the Lyapunov equation describing system controllability of the IEEE 48-machine system. A ∈ R^{97×97} and C ∈ R^{97×97} of the Lyapunov equation are detailed in the Supplementary Material. We set γ = 100. The simulation results are shown in Table 5 and Figure 8. The definitions of all terms in Figure 8 are the same as those in Figure 3. It can be seen from Table 5 and Figures 6-8 that the neural dynamics method is also suitable for solving Lyapunov equations in power systems, and the reduced-order RNN model proposed in this article is effective in the power system examples. Moreover, as the power system scale increases, the convergence and steady-state accuracy of the ZNN model are almost unchanged, indicating the applicability of the RNN model to power systems of different scales.
It is worth mentioning that the integration between electric power and natural gas systems has been steadily enhanced in recent decades. The incorporation of natural gas systems brings, in addition to a cleaner energy source, greater reliability and flexibility to the power system (Liu et al., 2021). Since the dynamic model of the electricity-gas coupled system can be expressed by differential-algebraic equations (Zhang, 2005; Yang, 2020), that is, the same form as the dynamic model of the power system, the above applicability analysis of the proposed methods for large power systems also applies to large electricity-gas coupled systems.

An Efficient Method for Vectorization of RNN Model

Table 6 compares the time cost of the RNN model vectorization method proposed in this article with that of the traditional vectorization method; for convenience, the former is called method A and the latter method B. To better demonstrate the effect of the proposed vectorization method, four examples are added, as shown in Table 6. The four newly added examples are generated in the same way as Example 3 and are detailed in the Supplementary Material. In Table 6, ms means millisecond, scale means the order of A, and proportion refers to the time taken by method A divided by the time taken by method B.

It can be seen from Table 6 that method A is significantly better than method B in terms of time cost, with a decrease of 48-98%. As the size of the examples increases, the proportion of time-cost improvement generally increases. It should be pointed out that when the system sizes are 15, 35, and 97, the corresponding examples are the IEEE standard systems mentioned before, which indicates that the proposed vectorization method is also effective in power system examples.

CONCLUSION

1) We propose an efficient method for vectorizing RNN models, which achieves higher computational efficiency than the traditional Kronecker-product-based vectorization. 2) To reduce the solving scale of the RNN model, a reduced-order RNN model for solving the Lyapunov equation is proposed based on symmetry. It is proved theoretically that the proposed model maintains the same solution as the original model and that it is applicable to both the ZNN and GNN models under linear or non-linear activation functions. 3) Several simulation examples verify the effectiveness and superiority of the proposed method, and three standard power system examples verify that the neural dynamics method is suitable for solving the Lyapunov equation of power systems.
Because the neural dynamics method has parallel distribution characteristics and hardware implementation convenience, its convergence and computation time are not sensitive to the system scale. Considering the current development level and trend of the very large-scale integration (VLSI) chip and the ultra large-scale integration (ULSI) chip, the wide application of the neural dynamics method in large-scale systems is expected.
In addition, research on the RNN model for solving the Lyapunov equation is still mainly at the stage of theoretical improvement and exploration, and there are few reports of hardware products. Hardware product design will be the main content of the next stage of our work.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.