## ORIGINAL RESEARCH article

Front. Phys., 17 April 2023
Sec. Interdisciplinary Physics
Volume 11 - 2023 | https://doi.org/10.3389/fphy.2023.1167797

# A modern analytic method to solve singular and non-singular linear and non-linear differential equations

• 1Department of Mathematics, Faculty of Science, Al Balqa Applied University, Salt, Jordan
• 2Department of Mathematics, Faculty of Science, Zarqa University, Zarqa, Jordan

This article circumvents the limitations of the classical Laplace transform method to provide an analytical solution in a power series form for singular, non-singular, linear, and non-linear ordinary differential equations. It introduces a new analytical approach, the Laplace residual power series, which provides a powerful tool for obtaining accurate analytical and numerical solutions to these equations. It demonstrates the new approach's effectiveness, accuracy, and applicability on several ordinary differential equation problems. The proposed technique makes it possible to find an exact solution whenever a pattern exists in the obtained series solution; otherwise, only approximate solutions can be given. To ensure the accuracy of the generated results, we use three types of errors: actual, relative, and residual. We compare our results with the exact solutions of the problems discussed. We conclude that the current method is simple, easy, and effective in solving non-linear differential equations, considering that the obtained approximate series solutions coincide with the closed forms of the exact solutions when these exist. Finally, we would like to point out that both symbolic and numerical quantities are calculated using Mathematica software.

## 1 Introduction

A differential equation or a system of differential equations, along with proper boundary and initial conditions (ICs), is one of the most common outputs when mathematical modeling describes physical, biological, or chemical phenomena. Finding ordinary or partial differential equations and analyzing their solutions are at the heart of applied mathematics [1].

Since ancient times, differential equations have attracted the interest of researchers and scientists on two fronts. The first is how to use them to express the phenomena and problems that interest them in their specializations and research. The second is the question of how to solve these equations. There are a limited number of differential equations, especially linear ones, whose solutions can be determined using the well-known traditional methods based on finite and simple algebraic operations. In contrast, many kinds of differential equations still lack simple and accurate solutions. For this reason, the interest of mathematicians in previous decades has been, and remains, the search for analytical, and sometimes numerical, methods to solve these forms of differential equations.

Many analytical, numerical, and numero-analytical techniques have been proposed previously and recently to provide solutions for differential equations with initial or boundary conditions, such as the Laplace and Fourier transforms method [2], the Adomian decomposition method [3–5], the variational iteration method [6–8], the homotopy perturbation method [9–11], the homotopy analysis method [12, 13], the differential transformation method [14–16], the finite difference method [17], the predictor–corrector method [18, 19], the first integral method [20, 21], the Adams–Bashforth–Moulton method [22], the new iterative method [23, 24], the Crank–Nicolson method [25, 26], the reproducing kernel method [27, 28], the Laplace Adomian decomposition method [11, 29, 30], the He–Laplace method [31–33], and others [34–36].

Recently, Eriqat et al. [37] presented a new hybrid method in which they combined the Laplace transform (LT) method with the residual power series (PS) method to establish series solutions of the pantograph equation. This method is called the LRPS method; it simulates the residual PS method but with a different construction and view, using the limit concept instead of the derivative concept employed in the residual PS method. The LRPS method uses the LT to transfer the given differential equation to a new algebraic equation in a new space. The obtained algebraic equation is solved by assuming that it has a solution in Laurent series (LS) form. The values of the coefficients of the LS are determined by utilizing the limit at infinity. Then, the inverse LT is used to transfer the LS, which is the solution of the algebraic equation in the Laplace space, back to the initial space. Thus, we obtain the solution to the original problem in the form of a PS.

Indeed, the LRPS method is similar in idea to the He–Laplace method [31–33] in that both search for the solution of differential equations in the Laplace space. The He–Laplace technique uses the variational iteration method or the homotopy perturbation method to solve the transformed equation in the Laplace space, whereas the LRPS method solves that equation with the PS method, using the Laurent series instead of the Taylor series. In addition, the LRPS method is an easy and fast technique for finding the coefficients of the PS solutions of differential equations.

The LRPS method has won the admiration and interest of many researchers due to its ease, speed, and efficiency in arriving at exact or accurate approximate solutions to many equations. It is noteworthy that the LT is employed here to deal with non-linear problems, even though the classical LT method handles only certain categories of linear equations. In 2021, El-Ajou [38] adapted the LRPS method to establish solitary solutions of non-linear time-fractional dispersive partial differential equations and to introduce a vector series solution of some types of hyperbolic systems of Caputo time-fractional partial differential equations with variable coefficients [39]. Recently, the LRPS method was used for solving time-fractional Navier–Stokes equations [40], fuzzy quadratic Riccati DEs [41], Lane–Emden equations of fractional order [42], a system of fractional initial value problems (IVPs) [43], autonomous n-dimensional fractional non-linear systems [44], and others [45–49].

Despite the extensive publication of research dealing with the new method, all previous works dealt with specific problems devoid of complexity and generality. Therefore, we aim in this manuscript, first, to employ the LRPS method to provide exact or accurate approximate analytic series solutions to linear ordinary differential equations (ODEs) in their general form, whether their coefficients are constants or analytic functions, which have the following formula:

$\frac{d^{n}y}{dt^{n}}=L_t\,y(t)+g(t),\quad t\ge 0. \qquad (1.1)$

Subject to the ICs,

$y(0)=y_0,\quad y'(0)=y_1,\quad\ldots,\quad y^{(n-1)}(0)=y_{n-1}, \qquad (1.2)$

where $L_t$ is a linear differential operator of order $n-1$ whose coefficients are analytic functions. This general equation is difficult to solve by the direct PS method. Herein lies the importance and novelty of the aim of this research.

Since in our world, most events are essentially non-linear and modeled by non-linear equations, the study of non-linear issues is critical in mathematics and physics, engineering, economics, and other disciplines. Solving non-linear problems is difficult, and getting an analytical approximation of a given problem is often more complicated than getting a numerical one. Therefore, the second aim of this paper is to establish analytic approximate solutions to the general form of non-linear ODEs, which have the following form using the proposed method (LRPS method):

$\frac{d^{n}y}{dt^{n}}=f\left(t,y(t),\frac{dy}{dt},\frac{d^{2}y}{dt^{2}},\ldots,\frac{d^{n-1}y}{dt^{n-1}}\right),\quad t\ge 0. \qquad (1.3)$

Subject to the ICs,

$y(0)=y_0,\quad y'(0)=y_1,\quad\ldots,\quad y^{(n-1)}(0)=y_{n-1}, \qquad (1.4)$

where $f$ is an analytic function on $[0,\infty)$.

The third objective of this article is to provide a series solution to singular ODEs, whether linear or non-linear. This type of equation is of great interest to researchers seeking analytical solutions, as it appears in the models of many natural phenomena and is notoriously difficult to solve.

To determine the efficiency and applicability of the method, we test three types of errors: the actual error, the relative error, and the residual error. We present the numerical results of the resulting solutions in prepared and organized tables. In addition, we sketch the approximate solution obtained by the proposed method along with the exact solution, when we can obtain it, to compare the two on the one hand and to determine the interval of convergence of the series solution on the other.

## 2 Basic facts of the LT and PS

In this section, we overview essential facts about the LT and the PS, along with some properties that are needed in this article.

Definition 2.1. [50]. We assume that $y(t)$ is a continuous function defined for $t\ge 0$ and let $s\in I\subseteq\mathbb{R}$. Then, the LT of $y(t)$ is the function $Y(s)$, denoted and defined as follows:

$Y(s)=\mathcal{L}\left[y(t)\right](s)=\int_{0}^{\infty}e^{-st}y(t)\,dt, \qquad (2.1)$

where the improper integral converges on an interval of $s$ that represents the domain of $Y(s)$. Also, the inverse LT of a function $Y(s)$, $s\in I$, is the function $y(t)$, $t\ge 0$, that is denoted and defined as

$y(t)=\mathcal{L}^{-1}\left[Y(s)\right](t)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{st}Y(s)\,ds,\quad c=\operatorname{Re}(s)>c_0, \qquad (2.2)$

where $c0$ lies in the right-half plane of the absolute convergence of the Laplace integral.
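Definition 2.1 can be checked numerically. The following Python sketch (our illustration, not part of the article, which performs its computations in Mathematica) approximates the Laplace integral by the composite trapezoid rule and compares it with the known transform $\mathcal{L}\left[\cos t\right](s)=\frac{s}{1+s^{2}}$:

```python
import math

def laplace_numeric(y, s, T=40.0, n=200_000):
    """Approximate Y(s) = integral_0^T e^(-st) y(t) dt by the composite trapezoid rule."""
    h = T / n
    total = 0.5 * (y(0.0) + math.exp(-s * T) * y(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * y(t)
    return h * total

# L[cos t](s) = s / (1 + s^2); at s = 2 this equals 2/5
approx = laplace_numeric(math.cos, 2.0)
```

At $s=2$ the quadrature reproduces $2/5$ to several digits; truncating the improper integral at $T=40$ is harmless here because $e^{-sT}$ is negligible.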

Lemma 2.1. [50]. Suppose that $y(t)$ and $x(t)$ are both continuous functions defined on $[0,\infty)$, $Y(s)=\mathcal{L}\left[y(t)\right]$, $X(s)=\mathcal{L}\left[x(t)\right]$, and $\eta,\lambda$ are constants. Then, we have the following properties:

1) $\mathcal{L}\left[e^{\lambda t}y(t)\right]=Y(s-\lambda)$

2) $\mathcal{L}\left[t^{n}y(t)\right]=(-1)^{n}\frac{d^{n}}{ds^{n}}Y(s)$

3) $\mathcal{L}\left[y(\lambda t)\right]=\frac{1}{\lambda}Y\left(\frac{s}{\lambda}\right),\ \lambda>0$

4) $\mathcal{L}^{-1}\left[\eta Y(s)+\lambda X(s)\right]=\eta\,\mathcal{L}^{-1}\left[Y(s)\right]+\lambda\,\mathcal{L}^{-1}\left[X(s)\right]=\eta\,y(t)+\lambda\,x(t)$

5) $\lim_{s\to\infty}sY(s)=y(0)$

6) $\mathcal{L}\left[y^{(n)}(t)\right]=s^{n}\mathcal{L}\left[y(t)\right]-\sum_{k=0}^{n-1}s^{n-k-1}y^{(k)}(0)$

Definition 2.2. [51]. A series that has the representation

$\sum_{n=-\infty}^{\infty}c_n\left(s-s_0\right)^{n}=\sum_{n=1}^{\infty}\frac{c_{-n}}{\left(s-s_0\right)^{n}}+\sum_{n=0}^{\infty}c_n\left(s-s_0\right)^{n} \qquad (2.3)$

is called the LS about $s=s_0$, where $s$ is the variable and the $c_n$'s are the coefficients of the series. The series $\sum_{n=0}^{\infty}c_n\left(s-s_0\right)^{n}$ is known as the analytic or regular part of the LS, while $\sum_{n=1}^{\infty}\frac{c_{-n}}{\left(s-s_0\right)^{n}}$ is known as the singular or principal part of the LS.

Theorem 2.1. [50]. Let $y(t)$ be an analytic function defined on the domain $D:\ \xi_1<t<\xi_2$ containing $t_0$. Then, $y(t)$ can be expanded as a PS as follows:

$y(t)=\sum_{n=0}^{\infty}c_n\left(t-t_0\right)^{n}, \qquad (2.4)$

which is valid for $\xi_1<t<\xi_2$.

Theorem 2.2. If $Y(s)=\mathcal{L}\left[y(t)\right]$ has an LS representation about $s=0$,

$Y(s)=\frac{c_0}{s}+\sum_{n=1}^{\infty}\frac{c_n}{s^{n+1}},\quad s>0, \qquad (2.5)$

then $c_n=y^{(n)}(0)$, $n=0,1,2,\ldots$.

Proof. Suppose that $Y(s)$ can be represented by the LS expansion as in Eq. 2.5. So,

$sY(s)=c_0+\sum_{n=1}^{\infty}\frac{c_n}{s^{n}},\quad s>0. \qquad (2.6)$

According to part (5) of Lemma 2.1, we have $c_0=y(0)$. Multiplying Eq. 2.6 by $s$ gives the following expansion:

$s^{2}Y(s)-y(0)s=c_1+\sum_{n=2}^{\infty}\frac{c_n}{s^{n-1}},\quad s>0. \qquad (2.7)$

Using part (5) of Lemma 2.1, it is obvious that

$c_1=\lim_{s\to\infty}\left(c_1+\sum_{n=2}^{\infty}\frac{c_n}{s^{n-1}}\right)=\lim_{s\to\infty}\left(s^{2}Y(s)-sy(0)\right)=\lim_{s\to\infty}s\left(sY(s)-y(0)\right)=\lim_{s\to\infty}s\,\mathcal{L}\left[y'(t)\right]=y'(0).$

Similarly, multiplying Eq. 2.7 by $s$ gives the following expansion:

$s\left(s^{2}Y(s)-y(0)s-y'(0)\right)=c_2+\sum_{n=3}^{\infty}\frac{c_n}{s^{n-2}},\quad s>0. \qquad (2.8)$

Again, by parts (5) and (6) of Lemma 2.1, we have

$c_2=\lim_{s\to\infty}\left(c_2+\sum_{n=3}^{\infty}\frac{c_n}{s^{n-2}}\right)=\lim_{s\to\infty}s\left(s^{2}Y(s)-sy(0)-y'(0)\right)=\lim_{s\to\infty}s\,\mathcal{L}\left[y''(t)\right]=y''(0).$

Now, we can find the general formula for the coefficient $c_n$: multiplying Eq. 2.6 by $s^{n}$, removing the previously determined terms, and taking the limit of the resulting equation as $s\to\infty$, we find that $c_n=y^{(n)}(0)$, $n=0,1,2,\ldots$. Thus, the proof is now complete.
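Theorem 2.2 suggests a direct numerical experiment: evaluate $s^{n+1}\left(Y(s)-\sum_{i<n}\frac{c_i}{s^{i+1}}\right)$ at a large $s$ and compare it with $y^{(n)}(0)$. A minimal Python sketch (ours, not from the article), using $y(t)=e^{2t}$, for which $Y(s)=\frac{1}{s-2}$ and $y^{(n)}(0)=2^{n}$:

```python
def laurent_coeff(Y, known, n, s=1e4):
    """Estimate c_n = y^(n)(0) from Theorem 2.2:
    c_n = lim_{s->oo} s^(n+1) * (Y(s) - sum_{i<n} c_i / s^(i+1))."""
    tail = Y(s) - sum(c / s ** (i + 1) for i, c in enumerate(known))
    return s ** (n + 1) * tail

# y(t) = e^(2t):  Y(s) = 1/(s-2),  y^(n)(0) = 2^n
Y = lambda s: 1.0 / (s - 2.0)
c3 = laurent_coeff(Y, [1.0, 2.0, 4.0], 3)   # should be close to 2^3 = 8
```

The finite $s$ leaves an $O(1/s)$ bias (consistent with the remainder bound of Theorem 2.3), so $s$ should be large but not so large that the subtraction loses all floating-point precision.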

Theorem 2.3. Assume that $\mathcal{L}\left[y(t)\right]=Y(s)$ can be represented as in Eq. 2.5. If $\left|s\,\mathcal{L}\left[y^{(n+1)}(t)\right]\right|\le K$ on $0<s\le d$, then the remainder $R_n(s)$ of the LS expansion appearing in Theorem 2.2 satisfies the relation

$\left|R_n(s)\right|\le\frac{K}{s^{n+2}},\quad 0<s\le d. \qquad (2.9)$

Proof. First, we assume that $\mathcal{L}\left[y^{(r)}(t)\right](s)$ is defined on $0<s\le d$ for $r=0,1,2,\ldots,n+1$. Also, we assume the following:

$\left|s\,\mathcal{L}\left[y^{(n+1)}(t)\right]\right|\le K,\quad 0<s\le d. \qquad (2.10)$

From the definition of the remainder, $R_n(s)=Y(s)-\sum_{i=0}^{n}\frac{y^{(i)}(0)}{s^{i+1}}$, one can acquire

$s^{n+2}R_n(s)=s^{n+2}\left(Y(s)-\sum_{m=0}^{n}\frac{y^{(m)}(0)}{s^{m+1}}\right)=s\left(s^{n+1}Y(s)-\sum_{m=0}^{n}s^{n-m}y^{(m)}(0)\right)=s\,\mathcal{L}\left[y^{(n+1)}(t)\right]. \qquad (2.11)$

Eq. 2.10 and Eq. 2.11 lead to the conclusion that $\left|s^{n+2}R_n(s)\right|\le K$. Thus,

$-K\le s^{n+2}R_n(s)\le K,\quad 0<s\le d. \qquad (2.12)$

The inequality $\left|R_n(s)\right|\le\frac{K}{s^{n+2}}$ can be obtained by reformulating Eq. 2.12, and so we get the result.

## 3 Constructing series solutions to ODEs

In this section, we first use the LRPS method to solve linear ODEs in preparation for solving non-linear ODEs. What is worth noting is the possibility of solving non-linear ODEs, which cannot be done using the traditional LT method. We will reuse the construction obtained for non-linear ODEs when solving singular ODEs, whether linear or non-linear, as we will see in Section 3.3.

### 3.1 LRPS method for solving linear ODEs

In this section, we demonstrate the steps of the LRPS method for solving linear ODEs. The basic idea of the proposed method is to apply the LT to the linear ODEs and then use the LRPS approach to construct a series solution, in LS form, to the transformed equation. Then, we transform the obtained solution into the required solution in the original space.

To illustrate the idea of the LRPS method in constructing series solutions to linear ODEs, we consider problems (1.1) and (1.2), considering that $L_t$ is a linear differential operator given by

$L_t=a_{n-1}(t)\frac{d^{n-1}}{dt^{n-1}}+\ldots+a_1(t)\frac{d}{dt}+a_0(t), \qquad (3.1)$

where $a_0(t),a_1(t),\ldots,a_{n-1}(t)$ and $g(t)$ are arbitrary analytic functions that depend only on $t$, $y(t)$ is the unknown function of the independent variable $t$, and $I$ is an open interval.

To generate the LRPS solution of the IVP (1.1) and (1.2), first, we apply the LT to both sides of Eq. 1.1 to obtain

$\mathcal{L}\left[y^{(n)}(t)\right]=\mathcal{L}\left[L_t\,y(t)\right]+\mathcal{L}\left[g(t)\right],\quad t\in I. \qquad (3.2)$

Using ICs (1.2) and some properties of the LT, Eq. 3.2 becomes

$Y(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\frac{1}{s^{n}}\mathcal{L}\left[L_t\,\mathcal{L}^{-1}\left[Y(s)\right]\right]+\frac{G(s)}{s^{n}},\quad s>0, \qquad (3.3)$

where $G(s)=\mathcal{L}\left[g(t)\right]$.

We assume that $Y(s)$ in Eq. 3.3 has an expansion in the LS form as

$Y(s)=\sum_{i=0}^{\infty}\frac{c_i}{s^{1+i}},\quad s>0. \qquad (3.4)$

Depending on Theorem 2.3 and the given conditions in Eq. 1.2, the first $n$ coefficients of expansion (3.4) can be determined, so it can be rewritten as follows:

$Y(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\sum_{i=n}^{\infty}\frac{c_i}{s^{1+i}},\quad s>0. \qquad (3.5)$

The $k$th-truncated series of $Ys$ is given by

$Y_k(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\sum_{i=n}^{k}\frac{c_i}{s^{1+i}},\quad s>0. \qquad (3.6)$

Thus, one can conclude

$Y_n(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\frac{c_n}{s^{1+n}},\quad s>0. \qquad (3.7)$

To find the values of the unknown coefficients in series (3.7), we define the Laplace residual function (LRF) of Eq. 3.3 as

$\mathrm{LRes}(s)=Y(s)-\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}-\frac{1}{s^{n}}\mathcal{L}\left[L_t\,\mathcal{L}^{-1}\left[Y(s)\right]\right]-\frac{G(s)}{s^{n}},\quad s>0, \qquad (3.8)$

and the $k$th LRF as

$\mathrm{LRes}_k(s)=Y_k(s)-\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}-\frac{1}{s^{n}}\mathcal{L}\left[L_t\,\mathcal{L}^{-1}\left[Y_k(s)\right]\right]-\frac{G(s)}{s^{n}},\quad s>0. \qquad (3.9)$

It is clear that $\lim_{k\to\infty}\mathrm{LRes}_k(s)=\mathrm{LRes}(s)$ and $\mathrm{LRes}(s)=0$; thus, $s^{k}\,\mathrm{LRes}(s)=0$ for $s>0$ and $k=0,1,2,3,\ldots$. Therefore, $\lim_{s\to\infty}s^{k}\,\mathrm{LRes}(s)=0$. Moreover,

$\lim_{s\to\infty}s^{k+1}\,\mathrm{LRes}(s)=\lim_{s\to\infty}s^{k+1}\,\mathrm{LRes}_k(s)=0,\quad k=1,2,3,\ldots \qquad (3.10)$

Substituting the $n$th-truncated series of Eq. 3.7 into the $n$th LRF, we obtain

$\mathrm{LRes}_n(s)=\frac{c_n}{s^{1+n}}-\frac{1}{s^{n}}\mathcal{L}\left[L_t\,\mathcal{L}^{-1}\left[\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\frac{c_n}{s^{1+n}}\right]\right]-\frac{G(s)}{s^{n}},\quad s>0. \qquad (3.11)$

Running the inverse LT in Eq. 3.11, we get

$\mathrm{LRes}_n(s)=\frac{c_n}{s^{1+n}}-\frac{1}{s^{n}}\mathcal{L}\left[L_t\left(\sum_{i=0}^{n-1}\frac{y_i}{i!}t^{i}+\frac{c_n}{n!}t^{n}\right)\right]-\frac{G(s)}{s^{n}},\quad s>0. \qquad (3.12)$

Since the coefficients of the linear operator in Eq. 3.1 are analytic functions, they can be expressed as

$a_r(t)=\sum_{j=0}^{\infty}\lambda_{rj}t^{j},\quad r=0,1,\ldots,n-1, \qquad (3.13)$

where

$\lambda_{rj}=\frac{a_r^{(j)}(0)}{j!},\quad r=0,1,\ldots,n-1,\quad j=0,1,2,\ldots. \qquad (3.14)$

So, the linear operator $Lt$ in Eq. 3.1 can be expressed as

$L_t=\sum_{r=0}^{n-1}\sum_{j=0}^{\infty}\lambda_{rj}t^{j}\frac{d^{r}}{dt^{r}}. \qquad (3.15)$

Running the operator $Lt$ on Eq. 3.12 according to its new form in Eq. 3.15, we get

$\mathrm{LRes}_n(s)=\frac{c_n}{s^{1+n}}-\frac{1}{s^{n}}\mathcal{L}\left[\sum_{r=0}^{n-1}\sum_{j=0}^{\infty}\left(\frac{c_n\,\lambda_{rj}}{(n-r)!}t^{n+j-r}+\sum_{i=r}^{n-1}\frac{\lambda_{rj}\,y_i}{(i-r)!}t^{i+j-r}\right)\right]-\frac{G(s)}{s^{n}}. \qquad (3.16)$

Finally, we run the LT in Eq. 3.16 to obtain the required form of the $n$th LRF:

$\mathrm{LRes}_n(s)=\frac{c_n}{s^{1+n}}-\frac{G(s)}{s^{n}}-\frac{1}{s^{n}}\sum_{r=0}^{n-1}\sum_{j=0}^{\infty}\left(\frac{c_n\,\lambda_{rj}}{(n-r)!}\frac{(n+j-r)!}{s^{1+n+j-r}}+\sum_{i=r}^{n-1}\frac{\lambda_{rj}\,y_i}{(i-r)!}\frac{(i+j-r)!}{s^{1+j+i-r}}\right). \qquad (3.17)$

Now, multiplying Eq. 3.17 by $sn+1$, we get the following function:

$s^{n+1}\,\mathrm{LRes}_n(s)=c_n-sG(s)-\sum_{r=0}^{n-1}\sum_{j=0}^{\infty}\frac{c_n\,\lambda_{rj}}{(n-r)!}\frac{(n+j-r)!}{s^{n+j-r}}-\sum_{r=0}^{n-1}\sum_{j=0}^{\infty}\sum_{i=r}^{n-1}\frac{\lambda_{rj}\,y_i}{(i-r)!}\frac{(i+j-r)!}{s^{j+i-r}}. \qquad (3.18)$

Taking the limit at infinity to Eq. 3.18, according to Eq. 3.10, we get

$c_n=g(0)+\sum_{r=0}^{n-1}\lambda_{r0}\,y_r. \qquad (3.19)$

Thus, the first approximation of the solution of Eq. 3.3 is

$Y_n(s)=\frac{y_0}{s}+\frac{y_1}{s^{2}}+\frac{y_2}{s^{3}}+\ldots+\frac{y_{n-1}}{s^{n}}+\frac{1}{s^{n+1}}\left(g(0)+\sum_{r=0}^{n-1}\lambda_{r0}\,y_r\right). \qquad (3.20)$

Following that, one can find the value of the coefficient $c_{n+1}$. To do that, we substitute the $(n+1)$th-truncated series, $Y_{n+1}(s)=\frac{y_0}{s}+\frac{y_1}{s^{2}}+\frac{y_2}{s^{3}}+\ldots+\frac{y_{n-1}}{s^{n}}+\frac{c_n}{s^{n+1}}+\frac{c_{n+1}}{s^{n+2}}$, into the $(n+1)$th LRF to get the following:

$\mathrm{LRes}_{n+1}(s)=\frac{c_n}{s^{n+1}}+\frac{c_{n+1}}{s^{n+2}}-\frac{1}{s^{n}}\mathcal{L}\left[L_t\,\mathcal{L}^{-1}\left[\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\frac{c_n}{s^{1+n}}+\frac{c_{n+1}}{s^{n+2}}\right]\right]-\frac{G(s)}{s^{n}}. \qquad (3.21)$

Performing the previous steps, we obtain the final form of the $n+1$th LRF:

$\mathrm{LRes}_{n+1}(s)=\frac{c_n}{s^{n+1}}+\frac{c_{n+1}}{s^{n+2}}-\frac{G(s)}{s^{n}}-\frac{1}{s^{n}}\sum_{r=0}^{n-1}\sum_{j=0}^{\infty}\left(\frac{c_{n+1}\,\lambda_{rj}}{(n+1-r)!}\frac{(n+1+j-r)!}{s^{2+n+j-r}}+\frac{c_n\,\lambda_{rj}}{(n-r)!}\frac{(n+j-r)!}{s^{1+n+j-r}}+\sum_{i=r}^{n-1}\frac{\lambda_{rj}\,y_i}{(i-r)!}\frac{(i+j-r)!}{s^{1+j+i-r}}\right). \qquad (3.22)$

Again, we multiply Eq. 3.22 by $sn+2$ to obtain

$s^{n+2}\,\mathrm{LRes}_{n+1}(s)=c_{n+1}-s^{2}G(s)+s\,g(0)+s\sum_{r=0}^{n-1}\lambda_{r0}\,y_r-\sum_{r=0}^{n-1}\sum_{j=0}^{\infty}\left(\frac{c_{n+1}\,\lambda_{rj}}{(n+1-r)!}\frac{(n+1+j-r)!}{s^{n+j-r}}+\frac{c_n\,\lambda_{rj}}{(n-r)!}\frac{(n+j-r)!}{s^{n+j-r-1}}+\sum_{i=r}^{n-1}\frac{\lambda_{rj}\,y_i}{(i-r)!}\frac{(i+j-r)!}{s^{i+j-r-1}}\right). \qquad (3.23)$

Computing the limit at infinity on both sides of the last equation and using Eq. 3.10, we get

$c_{n+1}=g'(0)+c_n\,\lambda_{n-1,0}+1!\left(\sum_{r=0}^{n-1}\lambda_{r1}\,y_r+\sum_{r=0}^{n-2}\lambda_{r0}\,y_{r+1}\right). \qquad (3.24)$

So, the second approximation of the solution of Eq. 3.3 is

$Y_{n+1}(s)=\frac{y_0}{s}+\frac{y_1}{s^{2}}+\frac{y_2}{s^{3}}+\ldots+\frac{y_{n-1}}{s^{n}}+\frac{1}{s^{n+1}}\left(g(0)+\sum_{r=0}^{n-1}\lambda_{r0}\,y_r\right)+\frac{1}{s^{n+2}}\left(g'(0)+\left(g(0)+\sum_{r=0}^{n-1}\lambda_{r0}\,y_r\right)\lambda_{n-1,0}+\sum_{r=0}^{n-1}\lambda_{r1}\,y_r+\sum_{r=0}^{n-2}\lambda_{r0}\,y_{r+1}\right). \qquad (3.25)$

Following the previous steps, we have

$c_{n+2}=g''(0)+c_{n+1}\,\lambda_{n-1,0}+c_n\,\lambda_{n-2,0}+\frac{2!}{1!}c_n\,\lambda_{n-1,1}+2!\left(\sum_{r=0}^{n-1}\lambda_{r2}\,y_r+\sum_{r=0}^{n-2}\frac{\lambda_{r1}\,y_{r+1}}{1!}+\sum_{r=0}^{n-3}\frac{\lambda_{r0}\,y_{r+2}}{2!}\right). \qquad (3.26)$

Repeating the steps, one can obtain

$c_{n+3}=g'''(0)+3!\sum_{i=0}^{2}\sum_{r=0}^{i}\frac{c_{n+2-i}\,\lambda_{n-1-i+r,\,r}}{(3-r)!}+3!\sum_{i=0}^{3}\sum_{r=0}^{n-1-i}\frac{\lambda_{r,\,3-i}\,y_{r+i}}{i!}. \qquad (3.27)$

Observing the pattern of the obtained coefficients, we easily deduce the coefficient $c_{n+k}$ as follows:

$c_{n+k}=g^{(k)}(0)+k!\sum_{i=0}^{k-1}\sum_{r=0}^{i}\frac{c_{n+k-1-i}\,\lambda_{n-1-i+r,\,r}}{(k-r)!}+k!\sum_{i=0}^{k}\sum_{r=0}^{n-1-i}\frac{\lambda_{r,\,k-i}\,y_{r+i}}{i!},\quad k=0,1,\ldots. \qquad (3.28)$

According to Eq. 3.14, the recurrence relation (3.28) becomes as follows:

$c_{n+k}=g^{(k)}(0)+\sum_{i=0}^{k-1}\sum_{r=0}^{i}\binom{k}{r}c_{n+k-1-i}\,a_{n-1-i+r}^{(r)}(0)+\sum_{i=0}^{k}\sum_{r=0}^{n-1-i}\binom{k}{i}y_{r+i}\,a_r^{(k-i)}(0). \qquad (3.29)$

Thus, we can express the $(k+1)$th-approximate solution of Eq. 3.3 by the following formula:

$Y_{n+k}(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\sum_{i=0}^{k}\frac{c_{n+i}}{s^{1+i+n}},\quad s>0,\quad k=0,1,\ldots. \qquad (3.30)$

Therefore, the exact solution of Eq. 3.3 can be expressed as

$Y(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\sum_{i=0}^{\infty}\frac{c_{n+i}}{s^{1+i+n}}. \qquad (3.31)$

Substituting the result in Eq. 3.29 into Eq. 3.31 and running the inverse LT gives the solutions of IVP (1.1) and (1.2) as follows:

$y(t)=\sum_{i=0}^{n-1}\frac{y_i}{i!}t^{i}+\sum_{i=0}^{\infty}\frac{t^{i+n}}{(i+n)!}\left(g^{(i)}(0)+\sum_{j=0}^{i-1}\sum_{r=0}^{j}\binom{i}{r}c_{n+i-1-j}\,a_{n-1-j+r}^{(r)}(0)+\sum_{j=0}^{i}\sum_{r=0}^{n-1-j}\binom{i}{j}y_{r+j}\,a_r^{(i-j)}(0)\right). \qquad (3.32)$
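The recurrence (3.29) is straightforward to implement. The sketch below is our illustration (the article carries out such computations in Mathematica): it takes the Maclaurin derivatives $a_r^{(j)}(0)$ of the coefficient functions, the derivatives $g^{(k)}(0)$, and the ICs, and returns $c_n,\ldots,c_{n+K}$. It is exercised on the constant-coefficient equation $y''=3y'-2y+8$, $y(0)=y'(0)=0$, whose first coefficients are $c_2=8$, $c_3=24$, $c_4=56$, $c_5=120$, $c_6=248$:

```python
from math import comb

def a_deriv(a_taylor, r, j):
    """a_r^(j)(0), with missing entries treated as zero."""
    row = a_taylor[r]
    return row[j] if j < len(row) else 0

def lrps_linear_coeffs(n, a_taylor, g_taylor, ics, K):
    """Coefficients c_n, ..., c_{n+K} of the LRPS series solution of
    y^(n) = sum_r a_r(t) y^(r) + g(t), via the recurrence (3.29).
    a_taylor[r][j] = a_r^(j)(0); g_taylor[k] = g^(k)(0); ics = [y_0, ..., y_{n-1}]."""
    c = {}
    for k in range(K + 1):
        val = g_taylor[k] if k < len(g_taylor) else 0
        for i in range(k):                      # first double sum of (3.29)
            for r in range(i + 1):
                idx = n - 1 - i + r             # which coefficient function a_idx
                if 0 <= idx <= n - 1:
                    val += comb(k, r) * c[n + k - 1 - i] * a_deriv(a_taylor, idx, r)
        for i in range(k + 1):                  # second double sum of (3.29)
            for r in range(n - i):
                val += comb(k, i) * ics[r + i] * a_deriv(a_taylor, r, k - i)
        c[n + k] = val
    return [c[n + k] for k in range(K + 1)]

# y'' = 3 y' - 2 y + 8, y(0) = y'(0) = 0:  a_1 = 3, a_0 = -2, g = 8
coeffs = lrps_linear_coeffs(2, [[-2], [3]], [8], [0, 0], 4)
```

For this test case the recurrence reduces to $c_k=3c_{k-1}-2c_{k-2}$ with $c_2=8$, so the output is easy to verify by hand.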

### 3.2 The LRPS method for solving non-linear ODEs

This section introduces the steps of the LRPS approach in solving non-linear ODEs. To explain the methodology of the proposed method in constructing series solutions to this class, we consider IVP (1.3) and (1.4).

To generate the LRPS solution of the IVP (1.3) and (1.4), the first step is to apply the LT to Eq. 1.3; utilizing conditions (1.4), we obtain

$Y(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\frac{1}{s^{n}}\Psi\left(s,Y(s),\frac{dY}{ds},\frac{d^{2}Y}{ds^{2}},\ldots,\frac{d^{m}Y}{ds^{m}}\right),\quad s>0, \qquad (3.33)$

where $\Psi$ is a multivariable function of $s,Y(s),\frac{dY}{ds},\frac{d^{2}Y}{ds^{2}},\ldots,$ and $\frac{d^{m}Y}{ds^{m}}$, $m\in\mathbb{N}$.

We assume that $Y(s)$ given in Eq. 3.33 can be expanded as in Eq. 3.4. According to the conditions given in Eq. 1.4 and Theorem 2.3, series (3.4) also has the form in Eq. 3.5, and the $k$th-truncated series of $Y(s)$ will be like Eq. 3.6.

To set the values of the unknown coefficients in Eq. 3.6, according to Eq. 3.33, we define the LRF of Eq. 3.33 as

$\mathrm{LRes}(s)=Y(s)-\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}-\frac{1}{s^{n}}\Psi\left(s,Y(s),\frac{dY}{ds},\frac{d^{2}Y}{ds^{2}},\ldots,\frac{d^{m}Y}{ds^{m}}\right),\quad s>0, \qquad (3.34)$

and the kth LRF as

$\mathrm{LRes}_k(s)=Y_k(s)-\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}-\frac{1}{s^{n}}\Psi\left(s,Y_k(s),\frac{dY_k}{ds},\frac{d^{2}Y_k}{ds^{2}},\ldots,\frac{d^{m}Y_k}{ds^{m}}\right),\quad s>0. \qquad (3.35)$

According to the form of $Y_k(s)$ as in Eq. 3.6, it is clear that $\Psi\left(s,Y_k(s),\frac{dY_k}{ds},\frac{d^{2}Y_k}{ds^{2}},\ldots,\frac{d^{m}Y_k}{ds^{m}}\right)$ has a finite LS as follows:

$\Psi\left(s,Y_k(s),\frac{dY_k}{ds},\frac{d^{2}Y_k}{ds^{2}},\ldots,\frac{d^{m}Y_k}{ds^{m}}\right)=\sum_{i=0}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{1+i}},\quad s>0, \qquad (3.36)$

where $\phi_i$ is a multivariable function of $c_n,c_{n+1},\ldots,c_{n+i}$, for $i=0,1,\ldots,k$.

Substituting the expansions (3.6) and (3.36) in (3.35) gives the following expansion form of the $k$th LRF:

$\mathrm{LRes}_k(s)=\sum_{i=n}^{k}\frac{c_i}{s^{1+i}}-\sum_{i=0}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{1+n+i}},\quad s>0. \qquad (3.37)$

Thus, the $n$th LRF is

$\mathrm{LRes}_n(s)=\frac{c_n}{s^{1+n}}-\sum_{i=0}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{1+n+i}},\quad s>0. \qquad (3.38)$

Now, we multiply Eq. 3.38 by $sn+1$ to get

$s^{n+1}\,\mathrm{LRes}_n(s)=c_n-\sum_{i=0}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{i}},\quad s>0. \qquad (3.39)$

Now, applying the limit as s → ∞ to both sides of Eq. 3.39 and using the fact in Eq. 3.10, we can easily determine the value of $cn$ by solving the following equation for $cn$:

$c_n=\phi_0\left(c_n\right). \qquad (3.40)$

In the same manner, we find the value of the coefficient $c_{n+1}$ by substituting the $(n+1)$th-truncated series, $Y_{n+1}(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{1+i}}+\frac{c_n}{s^{1+n}}+\frac{c_{n+1}}{s^{2+n}}$, into the $(n+1)$th LRF to get the following:

$\mathrm{LRes}_{n+1}(s)=\frac{c_{n+1}}{s^{2+n}}-\sum_{i=1}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{1+n+i}},\quad s>0. \qquad (3.41)$

Multiplying both sides of Eq. 3.41 by $s^{n+2}$, we get the following function:

$s^{n+2}\,\mathrm{LRes}_{n+1}(s)=c_{n+1}-\sum_{i=1}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{i-1}},\quad s>0. \qquad (3.42)$

Applying the limit at infinity to Eq. 3.42, we obtain the algebraic equation:

$c_{n+1}=\phi_1\left(c_n,c_{n+1}\right). \qquad (3.43)$

Solving Eq. 3.43 implicitly for $cn+1$ determines the second unknown coefficient in Eq. 3.6.

Similarly, we compute the third coefficient, $c_{n+2}$, by substituting the $(n+2)$th-truncated series, $Y_{n+2}(s)=\frac{y_0}{s}+\frac{y_1}{s^{2}}+\frac{y_2}{s^{3}}+\ldots+\frac{y_{n-1}}{s^{n}}+\frac{c_n}{s^{n+1}}+\frac{c_{n+1}}{s^{n+2}}+\frac{c_{n+2}}{s^{n+3}}$, into the $(n+2)$th LRF to get the following function:

$\mathrm{LRes}_{n+2}(s)=\frac{c_{n+2}}{s^{n+3}}-\sum_{i=2}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{1+n+i}},\quad s>0. \qquad (3.44)$

Multiplying Eq. 3.44 by $sn+3$ gives

$s^{n+3}\,\mathrm{LRes}_{n+2}(s)=c_{n+2}-\phi_2\left(c_n,c_{n+1},c_{n+2}\right)-\sum_{i=3}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{i-2}},\quad s>0. \qquad (3.45)$

According to fact (3.10), we obtain

$c_{n+2}=\phi_2\left(c_n,c_{n+1},c_{n+2}\right). \qquad (3.46)$

Solving Eq. 3.46 for $cn+2$ sets another coefficient in Eq. 3.6.

The value of the fourth unknown coefficient, $c_{n+3}$, can be obtained by similar arguments and by solving the following equation:

$c_{n+3}=\phi_3\left(c_n,c_{n+1},c_{n+2},c_{n+3}\right). \qquad (3.47)$

Considering the pattern of the obtained coefficients, we conclude that the coefficient $c_{n+k}$ satisfies the following implicit formula:

$c_{n+k}=\phi_k\left(c_n,c_{n+1},\ldots,c_{n+k}\right). \qquad (3.48)$

Thus, we can express the $(k+1)$th-approximate solution of Eq. 3.33 as follows:

$Y_{n+k}(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\sum_{i=0}^{k}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{1+n+i}},\quad s>0,\quad k=0,1,\ldots. \qquad (3.49)$

Therefore, the exact analytic solution of Eq. 3.33 is written in a series form:

$Y(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\sum_{i=0}^{\infty}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{s^{1+n+i}}. \qquad (3.50)$

Running the inverse LT to Eq. 3.50 gives the solution of Eq. 1.3 and Eq. 1.4 in a series expansion as

$y(t)=\sum_{i=0}^{n-1}\frac{y_i}{i!}t^{i}+\sum_{i=0}^{\infty}\frac{\phi_i\left(c_n,c_{n+1},\ldots,c_{n+i}\right)}{(i+n)!}t^{i+n}. \qquad (3.51)$

### 3.3 The LRPS method for solving singular-value problems

This section presents the LRPS method’s procedure for handling singular-value problems. To do this, let us consider the following singular-value problem:

$\frac{1}{t^{k}}\frac{d^{n}y}{dt^{n}}=f\left(t,y(t),\frac{dy}{dt},\frac{d^{2}y}{dt^{2}},\ldots,\frac{d^{n-1}y}{dt^{n-1}}\right),\quad t\in I,\quad k\in\mathbb{N}. \qquad (3.52)$

Subject to the ICs

$y(0)=y_0,\quad y'(0)=y_1,\quad\ldots,\quad y^{(n-1)}(0)=y_{n-1}. \qquad (3.53)$

To solve the initial-singular value problems (3.52) and (3.53), we first multiply Eq. 3.52 by $t^{k}$ to get

$\frac{d^{n}y}{dt^{n}}=t^{k}f\left(t,y(t),\frac{dy}{dt},\frac{d^{2}y}{dt^{2}},\ldots,\frac{d^{n-1}y}{dt^{n-1}}\right). \qquad (3.54)$

Now applying LT to Eq. 3.54 and using ICs (3.53), we get

$Y(s)=\sum_{i=0}^{n-1}\frac{y_i}{s^{i+1}}+\frac{1}{s^{n}}\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{k!}{s^{k+1}}\right]\,\mathcal{L}^{-1}\left[\Psi\left(s,Y(s),\frac{dY}{ds},\frac{d^{2}Y}{ds^{2}},\ldots,\frac{d^{m}Y}{ds^{m}}\right)\right]\right],\quad s>0. \qquad (3.55)$

Now, suppose that the function $Y(s)$ can be expressed in the form of expansion (3.4). We can then complete the steps described in Section 3.2 to obtain the required solution.

## 4 Applications to linear and non-linear problems

This section presents seven interesting problems with wide applications in physics and other sciences that are discussed and solved by the LRPS method.

Problem 4.1. Consider the following composite oscillation equation:

$\frac{d^{2}y}{dt^{2}}-a\frac{dy}{dt}-b\,y(t)=8,\quad t\ge 0, \qquad (4.1)$

with respect to the initial conditions

$y(0)=0,\quad y'(0)=0. \qquad (4.2)$

Comparing Eq. 4.1 with Eq. 1.1 concludes that $a_1(t)=a$, $a_0(t)=b$, and $g(t)=8$. Using the results obtained in Section 3.1, we can deduce $\lambda_{1,0}=a$, $\lambda_{1,1}=\lambda_{1,2}=\lambda_{1,3}=\ldots=0$ and $\lambda_{0,0}=b$, $\lambda_{0,1}=\lambda_{0,2}=\lambda_{0,3}=\ldots=0$. According to the recurrence relation in Eq. 3.29, we can see that $c_2=8$, $c_3=8a$, $c_4=8\left(b+a^{2}\right)$, $c_5=8\left(a^{3}+2ab\right)$, $c_6=8\left(a^{4}+3a^{2}b+b^{2}\right)$, $c_7=8\left(a^{5}+4a^{3}b+3ab^{2}\right)$, $c_8=8\left(a^{6}+5a^{4}b+6a^{2}b^{2}+b^{3}\right)$, $c_9=8\left(a^{7}+6a^{5}b+10a^{3}b^{2}+4ab^{3}\right)$, and $c_{10}=8\left(a^{8}+7a^{6}b+15a^{4}b^{2}+10a^{2}b^{3}+b^{4}\right)$. Therefore, the 10th approximation of the solution of the IVP (4.1) and (4.2) will be as follows:

$y_{10}(t)=\frac{8}{2!}t^{2}+\frac{8a}{3!}t^{3}+\frac{8\left(b+a^{2}\right)}{4!}t^{4}+\frac{8\left(a^{3}+2ab\right)}{5!}t^{5}+\frac{8\left(a^{4}+3a^{2}b+b^{2}\right)}{6!}t^{6}+\frac{8\left(a^{5}+4a^{3}b+3ab^{2}\right)}{7!}t^{7}+\frac{8\left(a^{6}+5a^{4}b+6a^{2}b^{2}+b^{3}\right)}{8!}t^{8}+\frac{8\left(a^{7}+6a^{5}b+10a^{3}b^{2}+4ab^{3}\right)}{9!}t^{9}+\frac{8\left(a^{8}+7a^{6}b+15a^{4}b^{2}+10a^{2}b^{3}+b^{4}\right)}{10!}t^{10}. \qquad (4.3)$

It is easy to check that the exact solution of Eq. 4.1 and Eq. 4.2 is as follows:

$y(t)=\frac{4}{bc}\left(\left(c+a\right)e^{\frac{1}{2}\left(a-c\right)t}+\left(c-a\right)e^{\frac{1}{2}\left(a+c\right)t}\right)-\frac{8}{b},\quad c=\sqrt{a^{2}+4b}. \qquad (4.4)$

To analyze the accuracy of the approximate solution in Eq. 4.3 and determine the interval of convergence, we introduce and compute two types of error, the actual and relative errors, which are defined, respectively, as follows:

$\mathrm{Act.\ Err.}(t)=\left|y(t)-y_{10}(t)\right| \qquad (4.5)$

and

$\mathrm{Rel.\ Err.}(t)=\left|\frac{y(t)-y_{10}(t)}{y(t)}\right|. \qquad (4.6)$
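Both errors can be evaluated in a few lines. The following Python sketch (ours; the article computes its tables in Mathematica) uses the Table 1 parameters $a=3$, $b=-2$, for which the coefficients satisfy $c_k=ac_{k-1}+bc_{k-2}$:

```python
import math

a, b = 3, -2                          # parameters of Table 1
c = [0.0, 0.0, 8.0, 8.0 * a]          # c_0, c_1, c_2 = 8, c_3 = 8a
for k in range(4, 11):                # c_k = a c_{k-1} + b c_{k-2} for this problem
    c.append(a * c[k - 1] + b * c[k - 2])

def y10(t):
    """10th approximation, Eq. 4.3."""
    return sum(c[k] * t ** k / math.factorial(k) for k in range(2, 11))

def y_exact(t):
    """Closed form, Eq. 4.4."""
    cc = math.sqrt(a * a + 4 * b)
    return (4 / (b * cc)) * ((cc + a) * math.exp(0.5 * (a - cc) * t)
                             + (cc - a) * math.exp(0.5 * (a + cc) * t)) - 8 / b

t = 0.5
act_err = abs(y_exact(t) - y10(t))
rel_err = act_err / abs(y_exact(t))
```

At $t=0.5$ both errors are of the order of $10^{-7}$, in line with the behavior reported in Table 1.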

For the analysis and comparison of the exact and approximate solutions of the IVP (4.1) and (4.2), Table 1 shows the numerical results of this problem. It displays the exact and approximate results in addition to the actual and relative errors at different values of $t$ within the interval $[0,1]$. The results indicate that the errors increase as the value of $t$ increases. It is known that by increasing the number of terms in the series solution, the error decreases and the convergence period of the truncated series increases. It should be noted that we can extend the convergence period using the multi-stage technique.

TABLE 1

TABLE 1. The exact and the $10$th approximate solutions of the IVP (4.1) and (4.2) and the actual and relative errors at $a=3$ and $b=−2$.

Problem 4.2. Consider the following Legendre-type equation:

$\left(1-t^{2}\right)y''(t)-2t\,y'(t)+2y(t)=0. \qquad (4.7)$

Subject to the ICs,

$y(0)=1,\quad y'(0)=-1. \qquad (4.8)$

According to the existence and uniqueness theorem, it is clear that the IVP (4.7) and (4.8) has a unique solution in the interval $(-1,1)$, so we seek to obtain this solution via the LRPS method. To reach our goal and be able to rely on the construction obtained in Section 3.1, it is necessary to rewrite Eq. 4.7 as follows:

$y''(t)=\frac{2t}{1-t^{2}}y'(t)-\frac{2}{1-t^{2}}y(t),\quad 0\le t<1. \qquad (4.9)$

Comparing Eq. 4.9 with Eq. 1.1, we find that

$L_t=\frac{2t}{1-t^{2}}\frac{d}{dt}-\frac{2}{1-t^{2}},\quad a_1(t)=\frac{2t}{1-t^{2}},\quad a_0(t)=-\frac{2}{1-t^{2}}, \qquad (4.10)$

where $a_0(t)$ and $a_1(t)$ are analytic functions on $[0,1)$. Since the coefficients of the linear operator in Eq. 4.10 are analytic functions, they can be expressed as Maclaurin expansions as follows:

$a_0(t)=\sum_{j=0}^{\infty}\left(-2\right)t^{2j},\quad a_1(t)=\sum_{j=0}^{\infty}2t^{2j+1}. \qquad (4.11)$

So, according to Eq. 4.11 and Eq. 3.14, we have

$\lambda_{0,2j}=\frac{a_0^{(2j)}(0)}{(2j)!}=-2,\quad\lambda_{0,2j+1}=\frac{a_0^{(2j+1)}(0)}{(2j+1)!}=0,\quad\lambda_{1,2j}=\frac{a_1^{(2j)}(0)}{(2j)!}=0,\quad\lambda_{1,2j+1}=\frac{a_1^{(2j+1)}(0)}{(2j+1)!}=2,\quad j=0,1,2,\ldots. \qquad (4.12)$

Comparing with the general formula (3.29), we can find the values of the coefficients as follows:

$c_2=\lambda_{0,0}y_0+\lambda_{1,0}y_1=-2,$
$c_3=c_2\lambda_{1,0}+\lambda_{0,1}y_0+\lambda_{1,1}y_1+\lambda_{0,0}y_1=0,$
$c_4=c_3\lambda_{1,0}+c_2\lambda_{0,0}+\frac{2!}{1!}c_2\lambda_{1,1}+2!\left(\lambda_{0,2}y_0+\lambda_{1,2}y_1+\frac{\lambda_{0,1}y_1}{1!}\right)=-8,$
$c_5=c_4\lambda_{1,0}+c_3\lambda_{0,0}+3c_3\lambda_{1,1}+3c_2\lambda_{0,1}+6c_2\lambda_{1,2}+6\left(\lambda_{0,3}y_0+\lambda_{1,3}y_1+\lambda_{0,2}y_1\right)=0,$
$c_6=c_5\lambda_{1,0}+c_4\lambda_{0,0}+4c_4\lambda_{1,1}+4c_3\lambda_{0,1}+12c_3\lambda_{1,2}+12c_2\lambda_{0,2}+24c_2\lambda_{1,3}+24\left(\lambda_{0,4}y_0+\lambda_{1,4}y_1+\lambda_{0,3}y_1\right)=-144,$
$c_7=c_6\lambda_{1,0}+c_5\lambda_{0,0}+5c_5\lambda_{1,1}+5c_4\lambda_{0,1}+20c_4\lambda_{1,2}+20c_3\lambda_{0,2}+60c_3\lambda_{1,3}+60c_2\lambda_{0,3}+120c_2\lambda_{1,4}+120\left(\lambda_{0,5}y_0+\lambda_{1,5}y_1+\lambda_{0,4}y_1\right)=0,$
$c_8=c_7\lambda_{1,0}+c_6\lambda_{0,0}+6c_6\lambda_{1,1}+6c_5\lambda_{0,1}+30c_5\lambda_{1,2}+30c_4\lambda_{0,2}+120c_4\lambda_{1,3}+120c_3\lambda_{0,3}+360c_3\lambda_{1,4}+360c_2\lambda_{0,4}+720c_2\lambda_{1,5}+720\left(\lambda_{0,6}y_0+\lambda_{1,6}y_1+\lambda_{0,5}y_1\right)=-5760,$
$c_9=c_8\lambda_{1,0}+c_7\lambda_{0,0}+7c_7\lambda_{1,1}+7c_6\lambda_{0,1}+42c_6\lambda_{1,2}+42c_5\lambda_{0,2}+210c_5\lambda_{1,3}+210c_4\lambda_{0,3}+840c_4\lambda_{1,4}+840c_3\lambda_{0,4}+2520c_3\lambda_{1,5}+2520c_2\lambda_{0,5}+5040c_2\lambda_{1,6}+5040\left(\lambda_{0,7}y_0+\lambda_{1,7}y_1+\lambda_{0,6}y_1\right)=0,$
$c_{10}=c_9\lambda_{1,0}+c_8\lambda_{0,0}+8c_8\lambda_{1,1}+8c_7\lambda_{0,1}+56c_7\lambda_{1,2}+56c_6\lambda_{0,2}+336c_6\lambda_{1,3}+336c_5\lambda_{0,3}+1680c_5\lambda_{1,4}+1680c_4\lambda_{0,4}+6720c_4\lambda_{1,5}+6720c_3\lambda_{0,5}+20160c_3\lambda_{1,6}+20160c_2\lambda_{0,6}+40320c_2\lambda_{1,7}+40320\left(\lambda_{0,8}y_0+\lambda_{1,8}y_1+\lambda_{0,7}y_1\right)=-403200.$

Therefore, the LRPS solution to Problem 4.2 can be expressed in the following series form:

$y(t)=1-t-t\left(t+\frac{t^{3}}{3}+\frac{t^{5}}{5}+\frac{t^{7}}{7}+\frac{t^{9}}{9}+\ldots\right). \qquad (4.13)$

The expansion in (4.13) contains the series expansion of the function $\tanh^{-1}t$. Therefore, the exact solution of the IVP (4.7) and (4.8) has the following closed form:

$y(t)=1-t-t\tanh^{-1}t,\quad 0\le t<1. \qquad (4.14)$
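A quick numerical check (ours, not from the article) confirms that $y(t)=1-t-t\tanh^{-1}t$ satisfies Eq. 4.7 (note that the all-positive signs $t+\frac{t^{3}}{3}+\frac{t^{5}}{5}+\ldots$ in (4.13) are the Maclaurin series of $\tanh^{-1}t$) and that the truncated series tracks the closed form inside the interval of convergence:

```python
import math

def y(t):   return 1 - t - t * math.atanh(t)
def yp(t):  return -1 - math.atanh(t) - t / (1 - t * t)
def ypp(t): return -1 / (1 - t * t) - (1 + t * t) / (1 - t * t) ** 2

t = 0.5
residual = (1 - t * t) * ypp(t) - 2 * t * yp(t) + 2 * y(t)   # LHS of Eq. 4.7

# partial sum of the series solution (terms up to t^10)
series = 1 - t - t * sum(t ** (2 * k + 1) / (2 * k + 1) for k in range(5))
gap = abs(series - y(t))
```

The residual vanishes to machine precision, and the truncation gap at $t=0.5$ is of the order of $10^{-5}$.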

Table 2 shows the numerical results of Problem 4.2. It shows the exact and approximate results in addition to the actual and relative errors at different values of $t\in[0,0.9]$. The displayed data are acceptable and can be improved by increasing the order of the approximation.

TABLE 2

TABLE 2. The exact and the $10$th approximate solutions of the IVP (4.7) and (4.8) and the actual and relative errors.

Problem 4.3. Consider the following non-linear non-homogeneous ODE:

$y^{(3)}(t)+y^{2}(t)+\cos(t)\,y'(t)=1-\cos(t). \qquad (4.15)$

Subject to the ICs,

$y(0)=0,\quad y'(0)=1,\quad y''(0)=0. \qquad (4.16)$

Similar to the previous problems, we operate the LT on both sides of Eq. 4.15 and employ the ICs (4.16). Then, we obtain the following equation in the Laplace space:

$Y(s)=\frac{1}{s^{2}}-\frac{1}{s^{3}}\mathcal{L}\left[\left(\mathcal{L}^{-1}\left[Y(s)\right]\right)^{2}\right]-\frac{1}{s^{3}}\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{s}{1+s^{2}}\right]\mathcal{L}^{-1}\left[sY(s)\right]\right]+\frac{1}{s^{4}}-\frac{1}{s^{2}\left(1+s^{2}\right)},\quad s>0. \qquad (4.17)$

We assume that the solution of Eq. 4.17 has the same LS expansion as in Eq. 3.4. According to ICs (4.16), the $k$th-truncated series of $Y(s)$ becomes

$Y_k(s)=\frac{1}{s^{2}}+\sum_{i=3}^{k}\frac{c_i}{s^{1+i}},\quad s>0. \qquad (4.18)$

To set the values of the unknown coefficients in series (4.18), we utilize the $k$th LRF of Eq. 4.17, which is defined as

$\mathrm{LRes}_k(s)=Y_k(s)-\frac{1}{s^{2}}+\frac{1}{s^{3}}\mathcal{L}\left[\left(\mathcal{L}^{-1}\left[Y_k(s)\right]\right)^{2}\right]+\frac{1}{s^{3}}\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{s}{1+s^{2}}\right]\mathcal{L}^{-1}\left[sY_k(s)\right]\right]-\frac{1}{s^{4}}+\frac{1}{s^{2}\left(1+s^{2}\right)},\quad s>0. \qquad (4.19)$

To determine the coefficient $c_3$, we substitute $Y_3(s)=\frac{1}{s^{2}}+\frac{c_3}{s^{4}}$ into $\mathrm{LRes}_3(s)$ and run the operators in Eq. 4.19 to get the following rational function:

$\mathrm{LRes}_3(s)=\frac{c_3}{s^{4}}-\frac{1}{s^{4}}+\frac{2}{s^{2}\left(1+s^{2}\right)}+\frac{2}{s^{6}}+\frac{8c_3}{s^{8}}+\frac{c_3}{\left(1+s^{2}\right)^{3}}-\frac{3c_3}{s^{2}\left(1+s^{2}\right)^{3}}+\frac{20c_3^{2}}{s^{10}}. \qquad (4.20)$

Employing fact (3.10), solving the equation $\lim_{s\to\infty}s^{4}\,\mathrm{LRes}_3(s)=0$ for $c_3$ gives $c_3=-1$. Similarly, to find the value of the second unknown coefficient, $c_4$, we substitute $Y_4(s)=\frac{1}{s^{2}}-\frac{1}{s^{4}}+\frac{c_4}{s^{5}}$ into the 4th LRF to get the following:

$\mathrm{LRes}_4(s)=\frac{c_4}{s^{5}}-\frac{2}{s^{4}}+\frac{2}{s^{2}\left(1+s^{2}\right)}+\frac{2}{s^{6}}-\frac{8}{s^{8}}+\frac{10c_4}{s^{9}}+\frac{20}{s^{10}}-\frac{70c_4}{s^{11}}+\frac{70c_4^{2}}{s^{12}}-\frac{1}{\left(1+s^{2}\right)^{3}}+\frac{3}{s^{2}\left(1+s^{2}\right)^{3}}+\frac{c_4\,s}{\left(1+s^{2}\right)^{4}}-\frac{6c_4}{s\left(1+s^{2}\right)^{4}}+\frac{c_4}{s^{3}\left(1+s^{2}\right)^{4}},\quad s>0. \qquad (4.21)$

Utilizing fact (3.10) via Eq. 4.21 gives $c_4=0$. Using the same procedure as mentioned above, we can find more coefficients of series (4.18). Some of them are $c_5=1$, $c_6=0$, $c_7=-1$, $c_8=0$, $c_9=1$, $c_{10}=0$, and $c_{11}=-1$. So, the solution of Eq. 4.17 has the following LS:

$Y(s)=\frac{1}{s^{2}}-\frac{1}{s^{4}}+\frac{1}{s^{6}}-\frac{1}{s^{8}}+\frac{1}{s^{10}}-\frac{1}{s^{12}}+\ldots. \qquad (4.22)$

Therefore, the LRPS solution to Eq. 4.15 and Eq. 4.16 can be expressed in the following series form:

$y(t)=t-\frac{t^{3}}{3!}+\frac{t^{5}}{5!}-\frac{t^{7}}{7!}+\frac{t^{9}}{9!}-\frac{t^{11}}{11!}+\ldots, \qquad (4.23)$

which is the expansion of the exact solution $y(t)=\sin t$.
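That $y(t)=\sin t$ really solves Eq. 4.15 can be verified in a couple of lines, since for this choice $y'''=-\cos t$ and $y'=\cos t$, so the equation reduces to the identity $-\cos t+\sin^{2}t+\cos^{2}t=1-\cos t$. A small Python sketch (ours, not from the article):

```python
import math

def residual(t):
    # y(t) = sin t: y''' = -cos t, y' = cos t in Eq. 4.15
    return -math.cos(t) + math.sin(t) ** 2 + math.cos(t) * math.cos(t) - (1 - math.cos(t))

worst = max(abs(residual(t)) for t in (0.0, 0.7, 1.5, 3.0))
```

The residual vanishes to machine precision at every sample point.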

Problem 4.4. Consider the following non-linear pantograph equation:

$\frac{d^{2}y}{dt^{2}}=-2\,y^{2}\!\left(\frac{t}{2}\right),\quad t\ge 0. \qquad (4.24)$

Subject to the ICs,

$y(0)=1,\quad y'(0)=0. \qquad (4.25)$

Following the same procedure as in the previous problems, we can express the LRPS solution to IVP (4.24) and (4.25) in the form

$y(t)=1-t^{2}+\frac{t^{4}}{12}-\frac{7t^{6}}{1440}+\frac{127t^{8}}{1290240}-\frac{10879t^{10}}{7431782400}+\ldots. \qquad (4.26)$

Since we cannot predict the pattern in the coefficients of the series solution in Eq. 4.26, we cannot reach the exact solution. Therefore, we test the results using the residual and relative errors, which are defined as follows, respectively:

$\mathrm{Res.\ Err.}(t)=\left|\mathcal{L}^{-1}\left[\mathrm{LRes}_k(s)\right]\right|=\left|\frac{d^{2}y_k}{dt^{2}}+2\,y_k^{2}\!\left(\frac{t}{2}\right)\right|, \qquad (4.27)$
$\mathrm{Rel.\ Err.}(t)=\left|\frac{y_k(t)-y_{k/2}(t)}{y_k(t)}\right|. \qquad (4.28)$
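The residual error only requires differentiating the truncated series and evaluating it at the delayed argument $t/2$, so it can be computed directly. A small Python sketch (ours; the article computes these values in Mathematica) using the coefficients of the 10th approximation in Eq. 4.26:

```python
# coefficients of the 10th approximation y_10(t) = sum a_k t^k, from Eq. 4.26
a = [1, 0, -1, 0, 1/12, 0, -7/1440, 0, 127/1290240, 0, -10879/7431782400]

def y10(t):
    return sum(ak * t ** k for k, ak in enumerate(a))

def y10_pp(t):
    return sum(k * (k - 1) * ak * t ** (k - 2) for k, ak in enumerate(a) if k >= 2)

def res_err(t):
    # residual of Eq. 4.24: y'' + 2 y(t/2)^2
    return abs(y10_pp(t) + 2 * y10(t / 2) ** 2)

value = res_err(0.3)
```

At $t=0.3$ the residual is far below $10^{-8}$, consistent with the residual-error behavior reported in Table 3.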

Table 3 shows the numerical results of Problem 4.4. It displays the 10th approximate solution in addition to the residual and relative errors at different values of $t$ within the interval $[0,1]$. The results indicate that the LRPS solution is acceptable mathematically in the period $[0,1]$.

TABLE 3

TABLE 3. The $10$th approximate LRPS solution of the IVP (4.24) and (4.25) and the residual and relative errors.

Problem 4.5. Consider the following homogeneous linear singular ODE:

$\sin(t)\,y''-2\cos(t)\,y'-\sin(t)\,y=0,\quad t>0, \qquad (4.29)$

with respect to the ICs:

$y(0)=2,\quad y'(0)=0. \qquad (4.30)$

We apply the LT on both sides of Eq. 4.29 and use the ICs in Eq. 4.30 to obtain the following symbolic algebraic equation in the Laplace space:

$\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{1}{1+s^{2}}\right]\mathcal{L}^{-1}\left[s^{2}Y(s)-2s\right]\right]-2\,\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{s}{1+s^{2}}\right]\mathcal{L}^{-1}\left[sY(s)-2\right]\right]-\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{1}{1+s^{2}}\right]\mathcal{L}^{-1}\left[Y(s)\right]\right]=0,\quad s>0. \qquad (4.31)$

Suppose that the solution of Eq. 4.31 has an LS expansion as in Eq. 3.4. According to ICs (4.30), the $k$th-truncated series (3.6) can be expressed as

$Y_k(s)=\frac{2}{s}+\sum_{i=2}^{k}\frac{c_i}{s^{1+i}},\quad s>0. \qquad (4.32)$

To set the unknown coefficients in series (4.32), we define the $k$th LRF of Eq. 4.31 as follows:

$\mathrm{LRes}_k(s)=\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{1}{1+s^{2}}\right]\mathcal{L}^{-1}\left[s^{2}Y_k(s)-2s\right]\right]-2\,\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{s}{1+s^{2}}\right]\mathcal{L}^{-1}\left[sY_k(s)-2\right]\right]-\mathcal{L}\left[\mathcal{L}^{-1}\left[\frac{1}{1+s^{2}}\right]\mathcal{L}^{-1}\left[Y_k(s)\right]\right],\quad s>0. \qquad (4.33)$

We substitute $Y_2(s)=\frac{2}{s}+\frac{c_2}{s^{3}}$ into $\mathrm{LRes}_2(s)$ and run the operators in Eq. 4.33 to get the following function:

$\mathrm{LRes}_2(s)=\frac{-2}{\left(1+s^{2}\right)^{3}}+\frac{4c_2}{\left(1+s^{2}\right)^{3}}-\frac{4s^{2}}{\left(1+s^{2}\right)^{3}}-\frac{c_2s^{2}}{\left(1+s^{2}\right)^{3}}-\frac{2s^{4}}{\left(1+s^{2}\right)^{3}}-\frac{c_2s^{4}}{\left(1+s^{2}\right)^{3}}. \qquad (4.34)$

Solving the equation $\lim_{s\to\infty}s^{2}\,\mathrm{LRes}_2(s)=0$ gives $c_2=-2$. Thus, the first approximation of the solution of Eq. 4.31 is $Y_2(s)=\frac{2}{s}-\frac{2}{s^{3}}$. Again, we substitute the 3rd-truncated series, $Y_3(s)=\frac{2}{s}-\frac{2}{s^{3}}+\frac{c_3}{s^{4}}$, into the 3rd LRF to get the following:

$\mathrm{LRes}_3(s)=\frac{-10}{\left(1+s^{2}\right)^{4}}+\frac{12c_3s}{\left(1+s^{2}\right)^{4}}-\frac{12s^{2}}{\left(1+s^{2}\right)^{4}}+\frac{4c_3s^{3}}{\left(1+s^{2}\right)^{4}}-\frac{2s^{4}}{\left(1+s^{2}\right)^{4}}. \qquad (4.35)$

Consequently, the equation $\lim_{s\to\infty}s^{3}\,\mathrm{LRes}_3(s)=0$ gives $c_3=0$. Likewise, we substitute the 4th-truncated series, $Y_4(s)=\frac{2}{s}-\frac{2}{s^{3}}+\frac{c_4}{s^{5}}$, into the 4th LRF to get the following:

$\mathrm{LRes}_4(s)=\frac{-10}{\left(1+s^{2}\right)^{5}}-\frac{4c_4}{\left(1+s^{2}\right)^{5}}-\frac{22s^{2}}{\left(1+s^{2}\right)^{5}}+\frac{21c_4s^{2}}{\left(1+s^{2}\right)^{5}}-\frac{14s^{4}}{\left(1+s^{2}\right)^{5}}+\frac{10c_4s^{4}}{\left(1+s^{2}\right)^{5}}-\frac{2s^{6}}{\left(1+s^{2}\right)^{5}}+\frac{c_4s^{6}}{\left(1+s^{2}\right)^{5}}. \qquad (4.36)$

Solving the equation $\lim_{s\to\infty}s^{4}\,\mathrm{LRes}_4(s)=0$ gives $c_4=2$. Applying the same procedure for $k=5,6,7,8$ leads to $c_5=0$, $c_6=-2$, $c_7=0$, and $c_8=2$. Thus, we conclude that the solution of Eq. 4.31 has the following expansion:

$Y(s)=\frac{2}{s}-\frac{2}{s^{3}}+\frac{2}{s^{5}}-\frac{2}{s^{7}}+\frac{2}{s^{9}}-\ldots. \qquad (4.38)$

Applying the inverse LT to Eq. 4.38 gives the LRPS solution to the IVP (4.29) and (4.30) in the following PS form:

$y(t)=2\left(1-\frac{t^{2}}{2!}+\frac{t^{4}}{4!}-\frac{t^{6}}{6!}+\frac{t^{8}}{8!}-\ldots\right). \qquad (4.39)$

It is clear that the closed form of the exact solution of the IVP (4.29) and (4.30) is $y(t)=2\cos t$.

Problem 4.6. Consider the following non-homogeneous non-linear Lane–Emden singular ODE:

$y''(t)+\frac{2}{t}\,y'(t)-t\sin\left(y(t)\right)=e^{2t},\quad t\in\left[0,2\right],$

considering the ICs:

$y(0)=\pi,\quad y'(0)=0.$

Using arguments similar to those in the previous problem, one can obtain the series solution of the LT of the IVP (4.40) and (4.41) as follows:

$Y(s)=\frac{\pi}{s}+\frac{1}{3s^{3}}+\frac{1}{s^{4}}+\frac{12}{5s^{5}}+\frac{14}{3s^{6}}+\frac{60}{7s^{7}}+\frac{15}{s^{8}}+\frac{28}{s^{9}}+\frac{2588}{45s^{10}}+\cdots.$

Applying the inverse LT to Eq. 4.42 gives the LRPS solution of the IVP (4.40) and (4.41) in the following series form:

$y(t)=\pi+\frac{t^{2}}{6}+\frac{t^{3}}{6}+\frac{t^{4}}{10}+\frac{7t^{5}}{180}+\frac{t^{6}}{84}+\frac{t^{7}}{336}+\frac{t^{8}}{1440}+\frac{647t^{9}}{4082400}+\cdots.$

There is no pattern between the series terms in Eq. 4.43, so it is difficult to predict a formula for the exact solution. Thus, we settle for the approximate solution obtained for Problem 4.6. It is worth noting that the more terms we calculate for the LRPS solution, the longer the series convergence interval and the higher the accuracy of the solution. Therefore, we test the 8th approximate LRPS solution for Problem 4.6 using the residual and relative errors, which are defined, respectively, as follows:

$\text{Res. Err.}(t)=\left|\mathcal{L}^{-1}\left[\mathrm{LRes}_{8}(s)\right]\right|=\left|t-e^{2t}t+2t^{2}+2t^{3}+\frac{4t^{4}}{3}+\frac{2t^{5}}{3}+\frac{4t^{6}}{15}+\frac{4t^{7}}{45}+\frac{8t^{8}}{315}+\frac{t^{9}}{1512}-\frac{13t^{10}}{4320}-\frac{1003t^{11}}{255150}-\frac{113221t^{12}}{32659200}\right|,$
$\text{Rel. Err.}(t)=\left|\frac{y_{8}(t)-y_{4}(t)}{y_{8}(t)}\right|=\frac{\frac{t^{6}}{84}+\frac{t^{7}}{336}+\frac{t^{8}}{1440}+\frac{647t^{9}}{4082400}}{\pi+\frac{t^{2}}{6}+\frac{t^{3}}{6}+\frac{t^{4}}{10}+\frac{7t^{5}}{180}+\frac{t^{6}}{84}+\frac{t^{7}}{336}+\frac{t^{8}}{1440}+\frac{647t^{9}}{4082400}}.$

Table 4 shows the numerical results of Problem 4.6. It illustrates the 8th approximate solution in addition to the residual and relative errors at different values of $t$ within the interval $[0,1]$. Similar to the results in the previous tables, the data are accurate over the interval $[0,1]$.

TABLE 4

TABLE 4. The $8$th approximate LRPS solution of the IVP (4.40) and (4.41) and the residual and relative errors.
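The residual check reported in Table 4 can be reproduced independently. The sketch below (Python/SymPy, our illustration; the ODE is taken as written in Eq. 4.40) substitutes the truncated series (4.43) into the equation and evaluates the residual at sample points in $[0,1]$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# 8th approximate LRPS solution, Eq. 4.43 (truncated at t^9)
y8 = (sp.pi + t**2/6 + t**3/6 + t**4/10 + 7*t**5/180
      + t**6/84 + t**7/336 + t**8/1440 + 647*t**9/4082400)

# residual of Eq. 4.40:  y'' + (2/t) y' - t sin(y) - e^(2t)
res = (sp.diff(y8, t, 2) + 2*sp.diff(y8, t)/t
       - t*sp.sin(y8) - sp.exp(2*t))

# the residual stays small on [0, 1] and grows with t
for tv in (sp.Rational(1, 5), sp.Rational(1, 2), 1):
    print(tv, sp.Abs(res.subs(t, tv)).evalf(6))
```

Because the series is determined term by term, the residual is of the order of the first neglected power of $t$, so it is smallest near the expansion point $t=0$.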

Problem 4.7. We consider the following micro-electromechanical system (MEMS) [52]:

$y''+y+\frac{\theta}{y-1}=0,\quad\theta>0,$

with the ICs:

$y(0)=y'(0)=0.$

This dynamic differential equation is used to describe the wire’s movement as a point mass, where $y$ is the dimensionless distance and $θ$ is a voltage-related parameter.

Following the same procedure as in the previous examples, the LT of the IVP (4.46) and (4.47) yields the following algebraic equation:

$\mathcal{L}\left\{\mathcal{L}^{-1}\left[s^{2}Y(s)\right]\mathcal{L}^{-1}\left[Y(s)\right]\right\}-s^{2}Y(s)+\mathcal{L}\left\{\left(\mathcal{L}^{-1}\left[Y(s)\right]\right)^{2}\right\}-Y(s)+\frac{\theta}{s}=0,\quad s>0.$

Applying the arguments and processes of the LRPS method, one can obtain the LRPS solution to the algebraic Eq. 4.48 as follows:

$Y(s)=\frac{\theta}{s^{3}}+\frac{\theta\left(\theta-1\right)}{s^{5}}+\frac{\theta\left(1-2\theta+7\theta^{2}\right)}{s^{7}}-\frac{\theta\left(1-3\theta+39\theta^{2}-127\theta^{3}\right)}{s^{9}}+\frac{\theta\left(1-4\theta+168\theta^{2}-1678\theta^{3}+4369\theta^{4}\right)}{s^{11}}+\cdots.$

Applying the inverse LT to Eq. 4.49 gives the LRPS solution of the IVP (4.46) and (4.47) as follows:

$y(t)=\frac{\theta t^{2}}{2!}+\frac{\theta\left(\theta-1\right)t^{4}}{4!}+\frac{\theta\left(1-2\theta+7\theta^{2}\right)t^{6}}{6!}-\frac{\theta\left(1-3\theta+39\theta^{2}-127\theta^{3}\right)t^{8}}{8!}+\frac{\theta\left(1-4\theta+168\theta^{2}-1678\theta^{3}+4369\theta^{4}\right)t^{10}}{10!}+\cdots.$

To test the accuracy of the obtained solution given in (4.50), we compute the residual and relative errors of the 10th approximation of the solution. Table 5 shows the 10th approximate solution in addition to the residual and relative errors at different values of $t$ within the interval $[0,1]$. The results indicate that the obtained solution is mathematically acceptable.

TABLE 5

TABLE 5. The $10$th approximate LRPS solution of the IVP (4.46) and (4.47) and the residual and relative errors.
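As a cross-check of the quoted coefficients, one can verify symbolically that the truncated series (4.50) satisfies the MEMS equation to high order in $t$, uniformly in $\theta$. A Python/SymPy sketch (our illustration, not the authors' code):

```python
import sympy as sp

t, th = sp.symbols('t theta')

# 10th approximate LRPS solution, Eq. 4.50
y10 = (th*t**2/2 + th*(th - 1)*t**4/sp.factorial(4)
       + th*(1 - 2*th + 7*th**2)*t**6/sp.factorial(6)
       - th*(1 - 3*th + 39*th**2 - 127*th**3)*t**8/sp.factorial(8)
       + th*(1 - 4*th + 168*th**2 - 1678*th**3
             + 4369*th**4)*t**10/sp.factorial(10))

# residual of y'' + y + theta/(y - 1); every power of t below t^8
# should cancel identically in theta
res = sp.diff(y10, t, 2) + y10 + th/(y10 - 1)
low_order = sp.expand(sp.series(res, t, 0, 8).removeO())
print(sp.simplify(low_order))  # 0
```

A vanishing low-order residual confirms that the polynomial coefficients in $\theta$ quoted in Eq. 4.50 are mutually consistent.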

On the other hand, specialists in MEMS implementations are most interested in the analysis of the pull-in phenomenon. The MEMS system in Eq. 4.46 and Eq. 4.47 behaves either periodically or unstably, depending on the value of the voltage-related parameter $θ$. At small values of $θ$, the solution of the system is stable and periodic, whereas at large values of $θ$, it becomes unstable, a phenomenon called pull-in instability. Figure 1 shows that the system is stable and periodic at $θ$ values less than or equal to the critical value ($θ=0.203632188$) [52] and becomes unstable at $θ$ values greater than the critical one.

FIGURE 1

FIGURE 1. Phase trajectories at different values of $θ$. (A) Solid line: $θ=0.2$; dotted line: $θ=0.203$; and dashed line: $θ=0.203632188$. (B) Solid line: $θ=0.2037$; dotted line: $θ=0.25$; and dashed line: $θ=0.3$.
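The qualitative behavior in Figure 1 can be reproduced by direct numerical integration of Eq. 4.46. The following pure-Python sketch (our illustration; the step size and the sample values of $θ$ are chosen for demonstration) integrates the system with a classical RK4 scheme and flags pull-in when the displacement approaches the plate at $y=1$:

```python
def simulate(theta, t_end=30.0, dt=1e-3, cap=0.95):
    """Integrate y'' + y + theta/(y - 1) = 0 with y(0) = y'(0) = 0
    using classical RK4; return the largest displacement reached,
    stopping early if the wire approaches the plate (pull-in)."""
    def acc(y):
        return -y - theta/(y - 1.0)

    y, v, y_max = 0.0, 0.0, 0.0
    for _ in range(int(t_end/dt)):
        # one RK4 step for the first-order system (y, v)
        k1y, k1v = v, acc(y)
        k2y, k2v = v + 0.5*dt*k1v, acc(y + 0.5*dt*k1y)
        k3y, k3v = v + 0.5*dt*k2v, acc(y + 0.5*dt*k2y)
        k4y, k4v = v + dt*k3v, acc(y + dt*k3y)
        y += dt*(k1y + 2*k2y + 2*k3y + k4y)/6
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        y_max = max(y_max, y)
        if y >= cap:          # pull-in: the oscillation never returns
            break
    return y_max

print(simulate(0.2))   # below the critical value: bounded oscillation
print(simulate(0.3))   # above it: displacement runs into the plate
```

With $θ=0.2$ the displacement oscillates below the unstable equilibrium, while $θ=0.3$ drives the wire past it, consistent with the critical value $θ≈0.203632188$ reported in [52].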

## 5 Conclusion

This study aims to test the efficiency of the LRPS method in finding series solutions for ODEs that are difficult to solve by analytical methods. We have succeeded in providing an exact solution, in PS form, to the general form of linear ODEs whose coefficients are analytic functions. We also treated non-linear ODEs with the proposed technique and found approximate solutions of high accuracy. The biggest surprise is the success of the LRPS method in providing series solutions about singular points that coincide with the exact results in some examples. Using the LRPS method, there is no longer an obstacle to obtaining a PS solution for a broad class of ODEs. In addition, the idea of the method circumvents the use of the LT to solve non-linear equations to which the LT is difficult to apply. Besides its efficiency in arriving at exact solutions, the LRPS method is easy and fast in finding the coefficients of a series solution. There is no doubt that the new method can be used to solve other classes of equations not treated in previous studies, such as partial differential equations, integral equations, and integro-differential equations, whether linear or non-linear, as well as algebraic equations. We note that the method has not yet been applied to differential equations with boundary conditions. All these and other topics will be investigated by our research team in the next stage.

## Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

## Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

## Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

## References

1. King A, Billingham J, Otto S. Differential equations: Linear, nonlinear, ordinary and partial. Cambridge: Cambridge University Press (2003).

2. Huntley E, Pickering WM, Zinober ASI. The numerical solution of linear time-dependent partial differential equations by the Laplace and fast Fourier transforms. J Comput Phys (1978) 27(2):256–71. doi:10.1016/0021-9991(78)90008-6

3. Adomian G. A review of the decomposition method and some recent results for nonlinear equations. Comput Maths Appl (1991) 21(5):101–27. doi:10.1016/0898-1221(91)90220-x

4. Evans DJ, Raslan KR. The Adomian decomposition method for solving delay differential equation. Int J Comput Maths (2005) 82(1):49–54. doi:10.1080/00207160412331286815

5. Ibijola EA, Adegboyegun BJ, Halid OY. On Adomian Decomposition Method (ADM) for numerical solution of ordinary differential equations. Adv Nat Appl Sci (2008) 2(3):165–70.

6. He JH. Approximate solution of nonlinear differential equations with convolution product nonlinearities. Comput Methods Appl Mech Eng (1998) 167(1-2):69–73. doi:10.1016/s0045-7825(98)00109-1

7. He JH. Variational iteration method–a kind of non-linear analytical technique: Some examples. Int J non-linear Mech (1999) 34(4):699–708. doi:10.1016/s0020-7462(98)00048-1

8. Abbasbandy S. Numerical solution of non-linear Klein–Gordon equations by variational iteration method. Int J Numer Methods Eng (2007) 70(7):876–81. doi:10.1002/nme.1924

9. Anjum N, He JH, Ain QT, Tian D. Li-He’s modified homotopy perturbation method for doubly-clamped electrically actuated microbeams-based microelectromechanical system. Facta Universitatis, Ser Mech Eng (2021) 19(4):601–12. doi:10.22190/fume210112025a

10. Anjum N, He JH. Homotopy perturbation method for N/MEMS oscillators. In: Mathematical methods in the applied sciences (2020).

11. Saadeh R. Numerical algorithm to solve a coupled system of fractional order using a novel reproducing kernel method. Alexandria Eng J (2021) 60(5):4583–91. doi:10.1016/j.aej.2021.03.033

12. Liao W, Zhu J, Khaliq AQ. An efficient high-order algorithm for solving systems of reaction-diffusion equations. Numer Methods Partial Differential Equations (2002) 18(3):340–54. doi:10.1002/num.10012

13. Abbasbandy S. Soliton solutions for the Fitzhugh–Nagumo equation with the homotopy analysis method. Appl Math Model (2008) 32(12):2706–14. doi:10.1016/j.apm.2007.09.019

14. Pukhov G. Taylor transforms and their application in electrical engineering and electronics. Kiev: Izdatel'stvo Naukova Dumka (1978). p. 259 (in Russian).

15. Pukhov GE. Differential transformations of functions and equations. Kiev: Naukova Dumka (1980). p. 54–7. (in Russian).

16. Abbasov T, Bahadir AR. The investigation of the transient regimes in the nonlinear systems by the generalized classical method. Math Probl Eng (2005) 2005(5):503–19. doi:10.1155/mpe.2005.503

17. Chawla M. A fourth-order tridiagonal finite difference method for general non-linear two-point boundary value problems with mixed boundary conditions. IMA J Appl Maths (1978) 21(1):83–93. doi:10.1093/imamat/21.1.83

18. Ascher UM, Ruuth SJ, Wetton BT. Implicit-explicit methods for time-dependent partial differential equations. SIAM J Numer Anal (1995) 32(3):797–823. doi:10.1137/0732037

19. Burrage K, Tian T. Predictor-corrector methods of Runge–Kutta type for stochastic differential equations. SIAM J Numer Anal (2002) 40(4):1516–37. doi:10.1137/s0036142900372677

20. Feng Z, Knobel R. Traveling waves to a Burgers–Korteweg–de Vries-type equation with higher-order nonlinearities. J Math Anal Appl (2007) 328(2):1435–50. doi:10.1016/j.jmaa.2006.05.085

21. Eslami M, Fathi Vajargah B, Mirzazadeh M, Biswas A. Application of first integral method to fractional partial differential equations. Indian J Phys (2014) 88(2):177–84. doi:10.1007/s12648-013-0401-6

22. Ascher UM, Petzold LR. Computer methods for ordinary differential equations and differential-algebraic equations. Philadelphia, PA: Society for Industrial and Applied Mathematics (1998). p. 61.

23. Ostrowski AM. Solution of equations and systems of equations: Pure and applied mathematics: A series of monographs and textbooks, vol. 9. Amsterdam: Elsevier (2016).

24. Hueso JL, Martínez E, Teruel C. Multipoint efficient iterative methods and the dynamics of Ostrowski's method. Int J Comput Maths (2019) 96(9):1687–701. doi:10.1080/00207160.2015.1080354

25. Li D, Zhang C. Nonlinear stability of discontinuous Galerkin methods for delay differential equations. Appl Maths Lett (2010) 23(4):457–61. doi:10.1016/j.aml.2009.12.003

26. Zaibin Z, Zhizhong S. A Crank-Nicolson scheme for a class of delay nonlinear parabolic differential equations. J Numer Methods Comput Appl (2010) 31(2):131.

27. Zhou Y, Cui M, Lin Y. Numerical algorithm for parabolic problems with non-classical conditions. J Comput Appl Maths (2009) 230(2):770–80. doi:10.1016/j.cam.2009.01.012

28. Geng F, Cui M. A reproducing kernel method for solving nonlocal fractional boundary value problems. Appl Maths Lett (2012) 25(5):818–23. doi:10.1016/j.aml.2011.10.025

29. Khuri SA. A Laplace decomposition algorithm applied to a class of nonlinear differential equations. J Appl Math (2001) 1(4):141–55. doi:10.1155/s1110757x01000183

30. Kiymaz O. An algorithm for solving initial value problems using Laplace Adomian decomposition method. Appl Math Sci (2009) 3:1453–9.

31. Nadeem M, He JH. He–Laplace variational iteration method for solving the nonlinear equations arising in chemical kinetics and population dynamics. J Math Chem (2021) 59:1234–45. doi:10.1007/s10910-021-01236-4

32. Rehman S, Hussain A, Rahman JU, Anjum N, Munir T. Modified Laplace based variational iteration method for the mechanical vibrations and its applications. Acta Mechanica et Automatica (2022) 16(2):98–102. doi:10.2478/ama-2022-0012

33. Anjum N, He JH. Laplace transform: Making the variational iteration method easier. Appl Maths Lett (2019) 92:134–8. doi:10.1016/j.aml.2019.01.016

34. He K, Nadeem M, Habib S, Sedighi HM, Huang D. Analytical approach for the temperature distribution in the casting-mould heterogeneous system. Int J Numer Methods Heat Fluid flow (2022) 32(3):1168–82. doi:10.1108/hff-03-2021-0180

35. Fang J, Nadeem M, Habib M, Karim S, Wahash HA. A new iterative method for the approximate solution of Klein-Gordon and sine-Gordon equations. J Funct Spaces (2022) 2022:1–9. doi:10.1155/2022/5365810

36. Anjum N, Suleman M, Lu D, He JH, Ramzan M. Numerical iteration for nonlinear oscillators by Elzaki transform. J Low Frequency Noise, Vibration Active Control (2020) 39(4):879–84. doi:10.1177/1461348419873470

37. Eriqat T, El-Ajou A, Oqielat MN, Al-Zhour Z, Momani S. A new attractive analytic approach for solutions of linear and nonlinear neutral fractional pantograph equations. Chaos, Solitons and Fractals (2020) 138:109957. doi:10.1016/j.chaos.2020.109957

38. El-Ajou A. Adapting the Laplace transform to create solitary solutions for the nonlinear time-fractional dispersive PDEs via a new approach. The Eur Phys J Plus (2021) 136(2):229–2. doi:10.1140/epjp/s13360-020-01061-9

39. El-Ajou A, Al-Zhour Z. A vector series solution for a class of hyperbolic system of Caputo time-fractional partial differential equations with variable coefficients. Front Phys (2021) 9:525250. doi:10.3389/fphy.2021.525250

40. Burqan A, El-Ajou A, Saadeh R, Al-Smadi M. A new efficient technique using Laplace transforms and smooth expansions to construct a series solution to the time-fractional Navier-Stokes equations. Alexandria Eng J (2022) 61(2):1069–77. doi:10.1016/j.aej.2021.07.020

41. Oqielat MN, El-Ajou A, Al-Zhour Z, Eriqat T, Al-Smadi M. A new approach to solving fuzzy quadratic Riccati differential equations. Int J Fuzzy Logic Intell Syst (2022) 22(1):23–47. doi:10.5391/ijfis.2022.22.1.23

42. Saadeh R, Burqan A, El-Ajou A. Reliable solutions to fractional Lane-Emden equations via Laplace transform and residual error function. Alexandria Eng J (2022) 61(12):10551–62. doi:10.1016/j.aej.2022.04.004

43. Salah E, Qazza A, Saadeh R, El-Ajou A. A hybrid analytical technique for solving multi-dimensional time-fractional Navier-Stokes system. AIMS Maths (2023) 8(1):1713–36. doi:10.3934/math.2023088

44. Alquran M, Ali M, Alshboul O. Explicit solutions to the time-fractional generalized dissipative Kawahara equation. J Ocean Eng Sci (2022) 1–5. doi:10.1016/j.joes.2022.02.013

45. Eriqat T, Oqielat MN, Al-Zhour Z, Khammash G, El-Ajou A, Alrabaiah H. Exact and numerical solutions of higher-order fractional partial differential equations: A new analytical method and some applications. Pramana (2022) 96(4):207. doi:10.1007/s12043-022-02446-4

46. Saadeh R, Ala’yed O, Qazza A. Analytical solution of coupled hirota–satsuma and KdV equations. Fractal and Fractional (2022) 6(12):694. doi:10.3390/fractalfract6120694

47. Saadeh R, Qazza A, Amawi K. A new approach using integral transform to solve cancer models. Fractal and Fractional (2022) 6(9):490. doi:10.3390/fractalfract6090490

48. El-Ajou A, Oqielat MN, Al-Zhour Z, Momani S. A class of linear non-homogenous higher order matrix fractional differential equations: Analytical solutions and new technique. Fractional Calculus Appl Anal (2020) 23(2):356–77. doi:10.1515/fca-2020-0017

49. Eriqat T, Oqielat MN, Al-Zhour Z, El-Ajou A, Bataineh A. Revisited Fisher’s equation and logistic system model: A new fractional approach and some modifications. Int J Dyn Control (2022) 11:555–63. doi:10.1007/s40435-022-01020-5

50. Nagle RK, Saff EB, Snider AD. Fundamentals of differential equations. United States: Pearson (2018).

51. Zill DG, Shanahan PD. A first course in complex analysis with applications. London: Jones & Bartlett Learning (2013).

52. Feng GQ, Niu JY. The analysis for the dynamic pull-in of a micro-electromechanical system. J Low Frequency Noise, Vibration Active Control (2022) 146134842211455. doi:10.1177/14613484221145588

Keywords: ordinary differential equations, Laplace transforms, power series, approximate solutions, Laurent series

Citation: El-Ajou A, Al-ghananeem H, Saadeh R, Qazza A and Oqielat MN (2023) A modern analytic method to solve singular and non-singular linear and non-linear differential equations. Front. Phys. 11:1167797. doi: 10.3389/fphy.2023.1167797

Received: 16 February 2023; Accepted: 23 March 2023;
Published: 17 April 2023.

Edited by:

Ji-Huan He, Soochow University, China

Reviewed by: