ESERK Methods to Numerically Solve Nonlinear Parabolic PDEs in Complex Geometries: Using Right Triangles

In this paper, Extrapolated Stabilized Explicit Runge-Kutta (ESERK) methods are proposed to solve nonlinear partial differential equations (PDEs) in right triangles. These algorithms evaluate the function more times per step than a standard explicit Runge-Kutta scheme (n_t evaluations per step). These extra evaluations do not increase the order of convergence, but the stability region grows as O(n_t²). Hence, the total computational cost is O(n_t) times lower than with a traditional explicit algorithm, and for this reason these algorithms have traditionally been used to solve stiff PDEs in squares/rectangles or cubes. In this paper, for the first time, ESERK methods are considered in a right triangle. It is demonstrated that this type of code keeps its convergence and stability properties under certain conditions. This new approach would allow nonlinear parabolic PDEs to be solved with stabilized explicit Runge-Kutta schemes in complex domains, decomposed into rectangles and right triangles.


INTRODUCTION
Let us suppose that we have to solve a nonlinear PDE with dominating diffusion:

u_t(t, x̄, ȳ) = u_x̄x̄ + u_ȳȳ + f(t, x̄, ȳ, u), (t, x̄, ȳ) ∈ [t_0, T] × Ω, (1)

subject to traditional initial and Dirichlet boundary conditions:

u(t_0, x̄, ȳ) = g_1(x̄, ȳ), (x̄, ȳ) ∈ Ω, (2)

and u(t, x̄, ȳ) = g_2(x̄, ȳ), (x̄, ȳ) ∈ ∂(Ω). (3)
These types of problems are very common in a large number of areas, such as atmospheric phenomena, biology, chemical reactions, combustion, financial mathematics, industrial engineering, laser modeling, malware propagation, medicine, mechanics, molecular dynamics, nuclear kinetics, etc., see [1][2][3][4][5][6][7][8][9], to mention a few. A widely-used approach for solving these time-dependent and multi-dimensional PDEs is to first discretize the space variables (with finite difference or spectral methods) to obtain a very large system of ODEs of the form y′ = f(t, y), y(t_0) = y_0, where y, y_0 ∈ R^n, t ≥ t_0, and f(t, y) takes values in R^n. This procedure is well known as the method of lines (MOL). But these systems of ODEs not only have a huge dimension; they might also become very stiff problems.
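The MOL step above can be sketched for a one-dimensional model problem. This is a minimal illustration, not the code of the paper: `mol_rhs` is a hypothetical helper discretizing u_t = u_xx + f(t, x, u) on (0, 1) with homogeneous Dirichlet boundaries and second-order central differences.

```python
import numpy as np

def mol_rhs(t, y, h, f):
    """Right-hand side of y' = f(t, y) obtained by the method of lines:
    u_xx is replaced by second-order central differences at interior nodes."""
    n = y.size
    dydt = np.empty(n)
    for i in range(n):
        left = y[i - 1] if i > 0 else 0.0       # u(t, 0) = 0
        right = y[i + 1] if i < n - 1 else 0.0  # u(t, 1) = 0
        dydt[i] = (left - 2.0 * y[i] + right) / h**2 + f(t, (i + 1) * h, y[i])
    return dydt

# Example with 99 interior nodes: the Jacobian of the diffusion part has
# real negative eigenvalues of magnitude up to ~4/h^2, hence the stiffness.
h = 1.0 / 100
x = np.linspace(h, 1.0 - h, 99)
y0 = np.sin(np.pi * x)
print(mol_rhs(0.0, y0, h, lambda t, x, u: 0.0)[:3])
```

For u(x) = sin(πx), the discrete Laplacian reproduces −π² sin(πx) up to O(h²), so the stiffness grows quadratically as the grid is refined.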
Hence, traditional explicit methods are usually very slow: due to absolute stability restrictions, it is necessary to use very small step lengths, see [6,7] and references therein. Therefore, these schemes are not usually considered.
Implicit schemes based on BDF and Runge-Kutta methods have much better stability properties. However, since the dimension of the ODE system is very high, very large nonlinear systems must be solved at each iteration.
Numerous other techniques have also been proposed, based on exponential time differencing (ETD) schemes (which require approximating operators involving matrix exponentials), alternating direction implicit methods (which have limitations on the order of convergence), and explicit-implicit algorithms. However, in every case, the number of operations is huge when the system dimension is high.
For those cases where the eigenvalues of the Jacobian of the function are known to be all negative and real, or very close to this semiaxis, there is another option: stabilized explicit Runge-Kutta methods (also called Runge-Kutta-Chebyshev methods). This happens, for example, when diffusion dominates in the PDE: when the Laplacian is discretized using finite differences or some spectral techniques, the associated matrix has this type of eigenvalues.
These algorithms are totally explicit, but they have regions of stability extended along the negative real axis. These schemes typically have order 2 or 4 [8,10-16]. Recently, we proposed a new procedure combined with Richardson extrapolation to obtain methods with other orders of convergence [17,18]. In all these methods, the integrators have many more stages than the order of convergence; most of these extra stages seek to extend the region of stability along the negative real axis as much as possible. Regions of stability increase quadratically with the number of stages. Thus, the cost per step is greater than in a classic Runge-Kutta method, because the function in Equation (4) must be evaluated n_t times. However, the number of steps decreases proportionally to n_t², so the total computational cost is reduced proportionally to n_t.
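The quadratic growth of the stability interval can be illustrated with the classical first-order Chebyshev stability function R_s(z) = T_s(1 + z/s²), used here only as a representative example (practical SERK/ESERK schemes use damped variants). |T_s(x)| ≤ 1 for x ∈ [−1, 1], so |R_s(z)| ≤ 1 exactly on [−2s², 0]: the interval grows like O(s²) while the cost per step grows only like O(s).

```python
def cheb_T(s, x):
    """Chebyshev polynomial of the first kind, T_s(x), by the three-term recurrence."""
    t_prev, t = 1.0, x
    for _ in range(s - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def stable(s, z):
    """True if z lies in the real stability interval of R_s(z) = T_s(1 + z/s^2)."""
    return abs(cheb_T(s, 1.0 + z / s**2)) <= 1.0

for s in (5, 10, 20):
    # the boundary of the real stability interval is z = -2 s^2
    print(s, stable(s, -2.0 * s**2), stable(s, -2.0 * s**2 - 1.0))
```

Doubling the number of stages s quadruples the admissible step size for a given spectral radius, which is the source of the O(n_t) saving quoted above.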
These schemes have traditionally been considered in squares/rectangles or cubes. But this makes it difficult to apply them to PDEs with complex geometries, which arise in most cases. Several strategies have been proposed to apply them when the original domain is neither a square nor a cube (see [3][4][5]): stabilized Runge-Kutta methods were implemented after using adaptive multiresolution techniques or fixed-mesh codes in space. But simulations in complex geometries constitute a very challenging problem, see (section 4, [5]), where the authors stated, for their results based on adaptive multiresolution techniques, that they "will only present here 2D and 3D simulations in simplified geometries for the sake of assessing our results and perspectives in the field." As far as we know, stabilized explicit Runge-Kutta methods have not yet been tested in triangles. For this reason, in this paper we analyse how ESERK methods can be employed to solve nonlinear PDEs in these types of regions, and their convergence.
In this paper, a summary of ESERK4 methods is provided in section 2. The major advance of our contribution is given in section 3: it is explained how ESERK4 can be utilized for Equation (1) when Ω is a right triangle. After some linear transformations and spatial discretizations, ESERK4 is numerically stable, and fourth-order convergence in time and second-order convergence in space are obtained. This opens a new way to numerically approach parabolic nonlinear PDEs in complex domains in the plane, which can more easily be decomposed into a union of triangles and rectangles. Finally, some conclusions and future goals are outlined.

Construction of First-Order Stabilized Explicit Methods
In [17], ESERK4 schemes were developed for nonlinear PDEs in several dimensions, with good stability properties and numerical results in squares and cubes. The idea is quite simple: first-order stabilized explicit Runge-Kutta (SERK) methods are derived using Chebyshev polynomials of the first kind:

R_s(z) = T_s(1 + z/s²), (5)

s being the number of stages of the first-order method.
We can construct Runge-Kutta schemes with R_s(z) as stability function by simply changing x = 1 + z/s² (considering that 1, T_s(x), and x are the stability functions of the identity operator, g_s, and hf(·), respectively, and writing R_s(z) as a linear combination of the Chebyshev polynomials; see Theorem 1 in [12] for more details).

Construction of Higher-Order ESERK Schemes
Once first-order SERK methods have been derived, they approximate the solution of the initial value problem (4) by performing n_i steps with constant step size h_i = h/n_i, for n_i = 1, 2, 3, 4, obtaining approximations y_{h_i} at t_0 + h. Finally,

y = (−y_{h_1} + 24 y_{h_2} − 81 y_{h_3} + 64 y_{h_4}) / 6

is a fourth-order approximation, with stability function

P_{4s}(z) = (−R_s(z) + 24 (R_s(z/2))² − 81 (R_s(z/3))³ + 64 (R_s(z/4))⁴) / 6.
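The extrapolation weights (−1, 24, −81, 64)/6 can be checked on any first-order base scheme. The sketch below uses explicit Euler as a stand-in for the SERK base method (an assumption for illustration only) and verifies that the combined local error shrinks like h⁵, i.e., global order 4.

```python
import math

def euler_steps(lmbda, h, n):
    """n explicit Euler substeps for y' = lmbda*y, y(0) = 1: (1 + h*lmbda/n)^n."""
    return (1.0 + h * lmbda / n) ** n

def extrapolated(lmbda, h):
    """Richardson combination of the first-order base method with the same
    weights (-1, 24, -81, 64)/6 that appear in P_{4s}(z)."""
    t = [euler_steps(lmbda, h, n) for n in (1, 2, 3, 4)]
    return (-t[0] + 24.0 * t[1] - 81.0 * t[2] + 64.0 * t[3]) / 6.0

# Local error should shrink like h^5 when h is halved.
lmbda = -1.0
e1 = abs(extrapolated(lmbda, 0.2) - math.exp(lmbda * 0.2))
e2 = abs(extrapolated(lmbda, 0.1) - math.exp(lmbda * 0.1))
print(e1 / e2)  # roughly 2^5 = 32
```

The weights are the Lagrange coefficients that cancel the 1/n, 1/n², and 1/n³ terms in the error expansion of the base method, leaving a fourth-order approximation.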

Parallel, Variable-Step, and Number of Stages Algorithm
In [17], we constructed a variable-step, variable-number-of-stages algorithm combining all the schemes derived there, with s up to 4,000. The idea is quite simple: (i) first, the step size is selected in order to control the local error; the best results were obtained using techniques developed for standard extrapolation methods (see Equations (8)-(11) in [17]); (ii) then, the minimum s is chosen so that absolute stability is satisfied.
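The two-step selection can be sketched as follows. This is not the code of [17]: `choose_stages` and `choose_step` are hypothetical helpers, and the stability constant `c` is a placeholder value, since the exact constant is scheme-dependent.

```python
import math

def choose_stages(dt, spectral_radius, c=0.25, s_max=4000):
    """Smallest stage number s with dt * spectral_radius <= c * s^2, i.e.,
    with the (approximately) O(s^2)-long stability interval covering the
    stiff eigenvalues.  c is a hypothetical scheme-dependent constant."""
    s = math.ceil(math.sqrt(dt * spectral_radius / c))
    return min(max(s, 5), s_max)

def choose_step(dt_old, err, tol, order=4):
    """Standard step-size controller: scale by (tol/err)^(1/(order+1)) with a
    safety factor and clipping, as in classical extrapolation codes."""
    return dt_old * min(5.0, max(0.1, 0.9 * (tol / err) ** (1.0 / (order + 1))))

print(choose_stages(0.05, 4.0e6))  # a stiff problem requires many stages
```

Selecting the step first and the stage number second keeps the error control and the stability control decoupled, which is what makes very large s (up to thousands of stages) practical.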
Recently, we have been developing a parallel version of this code (see [19]). Using 4 threads, CPU times are up to 2.5 times smaller than with the previous sequential algorithm. The new parallel code also has a lower memory demand, and therefore it is possible to solve higher-dimensional problems on regular PCs.

DECOMPOSITION OF COMPLEX GEOMETRIES INTO RIGHT TRIANGLES
Complex geometric shapes are ubiquitous in our natural environment. In this paper, we are interested in numerically solving partial differential equations (PDEs) in such geometries, which are very common in problems related to human bodies, materials, or simply a complicated engine in classical engineering applications.
One well-known strategy, within a finite element context, is to build the necessary modifications in the vicinity of the boundary; such an approach is studied in the composite finite element method (FEM). Methods based on finite elements are usually proposed only for linear PDEs. FEM is a numerical method for solving problems of engineering and mathematical physics (typical problems include structural analysis, heat transfer, fluid flow, mass transport, or electromagnetic potential, since these problems generally require numerically approximating the solution of linear partial differential equations); it transforms the problem into a system of algebraic equations. Unfortunately, it is more difficult to employ these techniques with nonlinear parabolic PDEs in several dimensions, although some results have been obtained in [20] on when the resulting discrete Galerkin equations have a unique solution. For some problems, these techniques are not easy to employ numerically: they are computationally very expensive, because they require solving nonlinear systems with huge dimension at every step, or it is difficult to demonstrate that the numerical schemes have a unique solution in the general case.
On the other hand, implicit-explicit (IMEX) methods have been employed to solve a very stiff nonlinear system of ODEs coming from the spatial discretization of nonlinear parabolic PDEs that appeared in the modeling of an ischemic stroke in [5]. The authors employed an adaptive multiresolution approach and a finite volume strategy for the spatial discretization, and a Strang splitting method in time, combining ROCK4, a stabilized explicit Runge-Kutta scheme, for the diffusion part, and Radau5, an implicit A-stable method, for the reaction. These methods were previously analyzed in [3] for streamer discharge simulations, and the authors demonstrated second-order convergence in time. Later, they employed similar strategies for different physical problems in [4,21]. As the authors state, some of these procedures are complicated except in simple domains like squares and cubes, and only second-order convergence in time is possible. However, there are complex problems where nonlinear terms have potentially large stiffness and, at the same time, it is necessary to solve the model efficiently with small errors. This motivates the derivation of high-order schemes with good internal stability properties.
In what follows, we explain a new strategy to numerically solve the nonlinear parabolic PDE given by Equation (1) when Ω is any right triangle. Therefore, any researcher can combine the theory utilized with FEM to spatially decompose any complex geometry into triangles (since any acute or obtuse triangle can be decomposed into two right triangles), and later employ the method described in this paper. Additionally, the schemes proposed in this work are fourth-order ODE solvers (in time), and the numerical spatial approximations are second-order (although fourth-order formulae can be explored, except for the points closest to the vertices).

Higher-Order Spatial Approximations in the Triangle
Without loss of generality, we can consider that our right triangle is T_R, the one with vertices (0, 1), (0, 0), (1, 0). Otherwise, we first use a linear transformation of the right triangle with vertices (x̄_1, ȳ_1), (x̄_0, ȳ_0), (x̄_2, ȳ_2) into T_R:

x = a_1 (x̄ − x̄_0) + a_2 (ȳ − ȳ_0),  y = b_1 (x̄ − x̄_0) + b_2 (ȳ − ȳ_0), (9)

where the parameters a_1, a_2, b_1, b_2 can be computed easily as

a_1 = (ȳ_1 − ȳ_0)/Det,  a_2 = −(x̄_1 − x̄_0)/Det,  b_1 = −(ȳ_2 − ȳ_0)/Det,  b_2 = (x̄_2 − x̄_0)/Det, (10)

where

Det = (x̄_2 − x̄_0)(ȳ_1 − ȳ_0) − (x̄_1 − x̄_0)(ȳ_2 − ȳ_0), (11)

and it is easy to check that Det ≠ 0 if and only if the three points are not on a line (and since we always have a triangle, Det ≠ 0).
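The linear transformation given by Equations (9) and (10) can be sketched as follows; `triangle_map` is a hypothetical helper name, and the coefficient formulas follow from requiring the three vertices to be sent to (0, 1), (0, 0), and (1, 0).

```python
def triangle_map(P1, P0, P2):
    """Coefficients of the affine map sending the triangle with vertices
    P1 -> (0, 1), P0 -> (0, 0), P2 -> (1, 0):
        x = a1*(xb - x0) + a2*(yb - y0),  y = b1*(xb - x0) + b2*(yb - y0).
    Det != 0 exactly when the three points are not collinear."""
    (x1, y1), (x0, y0), (x2, y2) = P1, P0, P2
    det = (x2 - x0) * (y1 - y0) - (x1 - x0) * (y2 - y0)
    if det == 0:
        raise ValueError("vertices are collinear")
    a1, a2 = (y1 - y0) / det, -(x1 - x0) / det
    b1, b2 = -(y2 - y0) / det, (x2 - x0) / det
    return a1, a2, b1, b2

# Triangle used in the numerical example below: right angle at (-1, 1).
print(triangle_map((0, 3), (-1, 1), (-3, 2)))  # (-0.4, 0.2, 0.2, 0.4)
```

For the triangle of the numerical example, this reproduces the coefficients a_1 = −2/5, a_2 = 1/5, b_1 = 1/5, b_2 = 2/5 stated in section 4.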
The main reason for decomposing our general region Ω ⊂ R² into right triangles (and not other triangles) is that, after the linear transformation given by Equations (9) and (10), our PDE given by Equation (1) transforms into the equation

u_t(t, x, y) = c_1 u_xx + c_2 u_yy + f̃(t, x, y, u), (x, y) ∈ T_R, (12)

subject to (traditional) initial and Dirichlet boundary conditions

u(t_0, x, y) = g̃_1(x, y), (x, y) ∈ T_R, (13)

and u(t, x, y) = g̃_2(x, y), (x, y) ∈ ∂(T_R), (14)

where ∂(T_R) is the border of the triangle with vertices (0, 1), (0, 0), (1, 0). One positive feature is that, after the traditional spatial discretization described below, the matrix obtained from the diffusion term has all its eigenvalues real, and therefore we can utilize the ESERK methods proposed in section 2.

Now, let us define the spatial discretization of our continuous problem given by Equation (12). The problem domain T_R is discretized by the grid points (x_i, y_j), where

x_i = i h, y_j = j h, i, j ≥ 0, with x_i + y_j ≤ 1, (15)

h being the grid length. With this semidiscretization, we approximate u_xx and u_yy at the point (x_i, y_j) with the following second-order formulae:

u_xx(x_i, y_j) ≈ (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h², u_yy(x_i, y_j) ≈ (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/h². (16)

After the linear transformation given by Equations (9) and (10), our PDE given by Equation (1) might transform into an equation where a term in u_xy appears. Normally, this term can be approximated in a square or a rectangle through the formula

u_xy(x_i, y_j) ≈ (u_{i+1,j+1} − u_{i+1,j−1} − u_{i−1,j+1} + u_{i−1,j−1})/(4h²);

however, in T_R, the point (x_i, y_j) can be in T_R, i.e., x_i + y_j < 1, while the point (x_{i+1}, y_{j+1}) might not satisfy x_{i+1} + y_{j+1} ≤ 1, and therefore we cannot employ these finite difference formulae if a term in u_xy appears. Fortunately, we will check that this term cancels after the transformation [given by Equations (9) and (10)] whenever the original triangle with vertices (x̄_1, ȳ_1), (x̄_0, ȳ_0), and (x̄_2, ȳ_2) is a right triangle and (x̄_0, ȳ_0) is the vertex of the right angle. This fact is illustrated in Figure 1: we need to approximate partial derivatives of u in the interior (orange points), and in the previous steps we have obtained approximations of the function in the interior and at the points on the border (blue points), but we do not have these values outside of T_R (red points). If we needed to approximate ∂²u_{2,1}/∂x∂y, it would be necessary to obtain an approximation of u_{3,2}, but this point is outside of T_R, the region of study.

FIGURE 1 | Spatial discretization in the right triangle T_R.
In this work, we employ only second-order approximations in space. In other works, for example [13], we have also employed SERK codes after higher-order discretizations in space, but in rectangles. Normally, in rectangles, we can use fourth-order formulae similar to

u_xx(x_i, y_j) ≈ (−u_{i+2,j} + 16u_{i+1,j} − 30u_{i,j} + 16u_{i−1,j} − u_{i−2,j})/(12h²), (19)

together with biased formulae on the lower edge. However, in the triangle, we can again observe in Figure 1 that we would need to approximate the solution at points outside T_R before we could calculate (19) near the vertex (0, 1). Obviously, one possible idea for the future is to consider the decomposition of complex regions into bigger rectangles in the interior and small right triangles near the border of the region where the PDE must be solved.

Now, we are ready to understand why we chose right triangles in the decomposition of complex regions. The main reason is that simple calculations give us, after the linear transformation given by Equations (9)-(11):

u_x̄x̄ + u_ȳȳ = a_1 (a_1 u_xx + b_1 u_xy) + b_1 (a_1 u_xy + b_1 u_yy) + a_2 (a_2 u_xx + b_2 u_xy) + b_2 (a_2 u_xy + b_2 u_yy), (20)

and therefore, after this linear transformation, u_x̄x̄ + u_ȳȳ has the following term in u_xy:

2 (a_1 b_1 + a_2 b_2) u_xy. (21)

If we replace a_1, a_2, b_1, and b_2 by their values given by Equation (10), this coefficient becomes

−2 [(x̄_1 − x̄_0)(x̄_2 − x̄_0) + (ȳ_1 − ȳ_0)(ȳ_2 − ȳ_0)] / Det², (22)

which is 0 if and only if the vectors from P_0 = (x̄_0, ȳ_0) to P_1 = (x̄_1, ȳ_1) and from P_0 to P_2 = (x̄_2, ȳ_2) are orthogonal, i.e., if they form a right angle at P_0.
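The cancellation of the cross-derivative term can be checked numerically. The sketch below (with the hypothetical helper name `cross_coefficient`) evaluates the u_xy coefficient 2(a_1 b_1 + a_2 b_2) produced by the chain rule, which is proportional to the dot product of the vectors from P_0 to the other two vertices.

```python
def cross_coefficient(P1, P0, P2):
    """Coefficient of u_xy after the change of variables: 2*(a1*b1 + a2*b2),
    proportional to the dot product of the vectors P0->P1 and P0->P2."""
    (x1, y1), (x0, y0), (x2, y2) = P1, P0, P2
    det = (x2 - x0) * (y1 - y0) - (x1 - x0) * (y2 - y0)
    a1, a2 = (y1 - y0) / det, -(x1 - x0) / det
    b1, b2 = -(y2 - y0) / det, (x2 - x0) / det
    return 2.0 * (a1 * b1 + a2 * b2)

# Right angle at P0 = (-1, 1): the coefficient vanishes.
print(cross_coefficient((0, 3), (-1, 1), (-3, 2)))   # 0.0
# Same triangle, but P0 placed at a non-right vertex: a u_xy term survives.
print(cross_coefficient((0, 3), (-3, 2), (-1, 1)))
```

This is exactly why the vertex of the right angle must be the one mapped to the origin: with any other labeling of the same triangle, the transformed equation keeps a u_xy term that cannot be discretized near the hypotenuse.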
Thus, if the original triangle has a right angle at P_0, there is no term in u_xy, and we can use the second-order approximations in space, with the spatial discretization described above. Additionally, the following theorem guarantees that ESERK methods can be employed (with numerical stability and good results) in this right triangle to solve the PDE given by Equations (1)-(3), after the linear transformation given by Equations (9)-(11) and the spatial discretization given by Equation (15):

Theorem: Let Equations (1)-(3) be the PDE to be solved, on a right triangle with a right angle at P_0. After the linear transformation given by Equations (9) and (10), this PDE transforms into Equations (12)-(14), which can be discretized by Equations (15) and (16), transforming it into a system of ODEs

U′(t) = A U(t) + F(t, x_i, y_j, u_ij),

F(t, x_i, y_j, u_ij) being the sum of f(t, x, y, u) at the grid points plus the function given by the spatial discretization of the derivatives at the boundary.
The matrix A associated with the terms c_1 u_xx + c_2 u_yy (with c_1, c_2 ≥ 0) is real and symmetric, and therefore all the eigenvalues of this matrix are negative and real. Hence, Extrapolated Stabilized Explicit Runge-Kutta methods are numerically stable whenever ∂_u[F(t, x, y, u)] does not modify this type of eigenvalues (real and negative) in the Jacobian, and s > 4√(Δt (µ + σ)) (µ being c_1/h² and σ = c_2/h²). Therefore, ESERK4 methods can solve Equations (1)-(3) with fourth-order convergence in time and second-order convergence in space.
Proof: It only remains to study the associated matrix A. Simple calculations allow us to write A blockwise in terms of 0_{i,j} (the i × j matrix with all values equal to 0), Id_i (the identity matrix of dimension i), µ = c_1/h², and σ = c_2/h²; therefore, A is a symmetric real matrix.
Finally, it is well known that all the eigenvalues of any symmetric real matrix A are real. Indeed, let us suppose that (λ, v) is a complex eigenpair of A, i.e., an eigenvector v = x + yi ∈ C^n, where x, y ∈ R^n, and λ = a + bi ∈ C is the corresponding eigenvalue, with a, b ∈ R. Then Ax + iAy = Av = λv = (ax − by) + i(bx + ay), so Ax = ax − by and Ay = bx + ay. Since A is symmetric, yᵀAx = xᵀAy, which yields −b‖y‖² = b‖x‖², hence b(‖x‖² + ‖y‖²) = 0, so b = 0 and λ is real.
Therefore, whenever the nonlinear part does not modify this type of eigenvalues (real and negative) in the Jacobian, a bound for the spectral radius of the Jacobian is 4(µ + σ), and we merely need to use an ESERK method with s > 4√(Δt (µ + σ)) to guarantee numerical stability.
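The structure claimed in the proof can be verified numerically. The sketch below (with the hypothetical helper name `assemble_A`) assembles A for the second-order discretization on the interior grid points of T_R and checks that it is symmetric, with negative real eigenvalues bounded below by −4(µ + σ).

```python
import numpy as np

def assemble_A(N, c1=1.0, c2=1.0):
    """Matrix of the discretized diffusion term c1*u_xx + c2*u_yy on the
    interior grid points of T_R (x_i + y_j < 1), with grid length h = 1/N.
    Neighbors on the boundary contribute to the inhomogeneous term F instead."""
    h = 1.0 / N
    pts = [(i, j) for i in range(1, N) for j in range(1, N) if i + j < N]
    idx = {p: k for k, p in enumerate(pts)}
    mu, sigma = c1 / h**2, c2 / h**2
    A = np.zeros((len(pts), len(pts)))
    for (i, j), k in idx.items():
        A[k, k] = -2.0 * (mu + sigma)
        for di, dj, w in ((1, 0, mu), (-1, 0, mu), (0, 1, sigma), (0, -1, sigma)):
            nb = (i + di, j + dj)
            if nb in idx:
                A[k, idx[nb]] = w
    return A

A = assemble_A(10)
eig = np.linalg.eigvalsh(A)   # A is symmetric, so the eigenvalues are real
print(eig.min(), eig.max())   # all negative, bounded below by -4*(mu + sigma)
```

For N = 10 and c_1 = c_2 = 1, µ = σ = 100, so the Gershgorin bound −4(µ + σ) = −800 holds, in agreement with the spectral radius bound used in the proof.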

NUMERICAL EXAMPLE
Let us now study the numerical behavior of ESERK methods in a right triangle with one example. We consider that Ω is the triangle with vertices (−1, 1), (−3, 2), and (0, 3), and the initial and boundary conditions are taken such that u(t, x̄, ȳ) = e^{−t} sin(π(ȳ − 2x̄ − 3)/5) is the solution. Hence, we first apply the linear transformation given by Equations (9) and (10), i.e., a_1 = −2/5, a_2 = 1/5, b_1 = 1/5, b_2 = 2/5. In this way, the problem transforms into an equation on T_R whose initial and boundary conditions are such that u(t, x, y) = e^{−t} sin(πx) is the solution. Now, it is possible to utilize second-order approximations in space, as explained in the previous section. ESERK4 with s = 100 and s = 150 was considered for this numerical experiment, with different values of h = Δx = Δy and k = Δt. Numerical convergence at several points was analyzed with both methods, and numerical errors at two points [p_1 = (t, x, y) = (1, 0.15, 0.15) and p_2 = (t, x, y) = (1, 0.5, 0.25)] are shown in Tables 1, 2.
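A quick consistency check of this setup: with a_1 = −2/5 and a_2 = 1/5 (right angle at (−1, 1)), the map x = a_1(x̄ + 1) + a_2(ȳ − 1) turns the exact solution in the barred variables into e^{−t} sin(πx). The sketch below verifies this identity at a sample point.

```python
import math

def x_of(xb, yb):
    """First coordinate of the linear transformation for this example:
    x = -0.4*(xb + 1) + 0.2*(yb - 1) = (yb - 2*xb - 3)/5."""
    return -0.4 * (xb + 1.0) + 0.2 * (yb - 1.0)

xb, yb, t = -1.5, 2.0, 1.0
u_original = math.exp(-t) * math.sin(math.pi * (yb - 2.0 * xb - 3.0) / 5.0)
u_transformed = math.exp(-t) * math.sin(math.pi * x_of(xb, yb))
print(abs(u_original - u_transformed))  # ~0 up to rounding
```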
First of all, we calculated all the eigenvalues of the matrix A after the spatial discretization; as demonstrated in the theorem above, all of them are real and negative. In both Tables 1, 2, if we take k = 0.2 (and also with k = 0.1), we can observe that the errors are similar for the three different values h = 0.025, 0.0125, and 0.00625 at many of the points. In this case, most of the error is due to the temporal discretization. Actually, in the L² norm, errors with constant k = 0.2 grow when h decreases for the three step lengths in space; this is because there are more points, and they are closer to the border.
If we take h = 0.025 constant and vary k = 0.2, 0.1, and 0.05, in general we observe that the errors at most of the points decrease between k = 0.2 and k = 0.1; however, comparing the errors with h = 0.025, k = 0.1, and k = 0.05, the errors are similar at most points (and also in the L² norm). Obviously, this is because, with h = 0.025 and k = 0.1 or k = 0.05, part of the error is due to the spatial discretization. Therefore, it is not so easy to observe fourth-order convergence in time and second-order convergence in space. If we choose h = 0.00625, then most of the error with k_1 = 0.2, k_2 = 0.1, and k_3 = 0.05 is due to the temporal discretization. Hence, calculating log_{k_1/k_3}(err_1/err_3) (these values are called Temporal convergence in Tables 1, 2, err_1 being the error with k_1 and err_3 the error with k_3), we observe numerical rates in the range 3.6-4.3 in general, which gives a good idea of the fourth-order convergence in time of ESERK4 schemes. Now, if we fix k = 0.05 and repeat the process with h_1 = 0.025, h_2 = 0.0125, and h_3 = 0.00625, we observe that between h_1 and h_2 the errors are divided (more or less) by 4, which gives a good idea of the second-order convergence in space of the discretization proposed for the right triangle. However, with k = 0.05 and h_3, part of the error is due to the temporal discretization. Thus, if we calculate log_{h_1/h_3}(err_1/err_3) (these values are called Spatial convergence in Tables 1, 2, err_1 being the error with h_1 and err_3 the error with h_3), we observe numerical rates in the range 1.2-1.7.
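The observed rates above are computed with the standard formula log_{k_1/k_3}(err_1/err_3); a one-line sketch, with illustrative numbers rather than the paper's data:

```python
import math

def observed_order(err_coarse, err_fine, step_coarse, step_fine):
    """Observed convergence rate log_{k1/k3}(err1/err3), as used for the
    Temporal/Spatial convergence columns of Tables 1 and 2."""
    return math.log(err_coarse / err_fine) / math.log(step_coarse / step_fine)

# e.g., errors shrinking by ~256 when the time step is quartered
print(observed_order(2.56e-4, 1.0e-6, 0.2, 0.05))  # ≈ 4
```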
Since part of the error with k = 0.05 is due to the temporal discretization, and the temporal convergence is fourth-order, let us choose a smaller k_4 = 0.025 and repeat the process with this time step length. In Table 3, errors with both methods (s = 100 and s = 150) and h = Δx = Δy = 0.025, 0.0125, and 0.00625 are shown at both points, p_1 and p_2. Now, most of the error is due to the spatial discretization, and we can observe that the numerical spatial convergence rates are in the range 1.5-2.7. They suggest that the numerical convergence rate is 2, as expected from the previous theoretical analysis.
In Figure 2, the exact solution and the numerical approximation obtained with ESERK4, with s = 150, k = Δt = 0.025, and h = Δx = Δy = 0.00625, are shown. We can check that both plots look identical.

CONCLUSIONS AND FUTURE GOALS
In this paper, for the first time, ESERK schemes are proposed to solve nonlinear partial differential equations (PDEs) in right triangles. These codes are explicit: they do not require solving very large systems of linear or nonlinear equations at each step. It is demonstrated that this type of code is able to solve nonlinear PDEs in right triangles, keeping the order of convergence and the absolute stability properties under certain conditions. Hence, this paper opens a new line of research, because this new approach will allow, in the future, nonlinear parabolic PDEs to be solved with stabilized explicit Runge-Kutta schemes in complex domains, decomposed into rectangles and right triangles.
Additionally, we consider that this procedure can be extended to tetrahedra and other simplices for the solution of multidimensional nonlinear PDEs in complex regions in R^n.

DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.

AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.

FUNDING
The author acknowledges support from the University of Salamanca through its own Programa Propio I, Modalidad C2 grant 18.KB2B.