Article

Direct Power Series Approach for Solving Nonlinear Initial Value Problems

1 Department of Mathematics, Zarqa University, Zarqa 13110, Jordan
2 Department of Mathematics, Jadara University, Irbid 21111, Jordan
* Author to whom correspondence should be addressed.
Axioms 2023, 12(2), 111; https://doi.org/10.3390/axioms12020111
Submission received: 1 December 2022 / Revised: 27 December 2022 / Accepted: 28 December 2022 / Published: 20 January 2023

Abstract

In this research, a new approach for solving fractional initial value problems is presented. The main goal of this study is to establish direct formulas for the coefficients of approximate series solutions of the target problems. The new method provides analytical series solutions for both fractional and ordinary differential equations and systems directly, without complicated computations. To show the reliability and efficiency of the presented technique, linear and nonlinear differential equations and systems of ordinary and fractional orders are solved directly by the new approach. The method is faster than other analytical power series methods at generating many terms of the analytic solution. The main motivation of this work is to introduce general formulas that express the series solutions of some types of differential equations in a simple way and with fewer calculations than other power series methods; in particular, no differentiation, discretization, or limit-taking is needed while constructing the approximate solution.

1. Introduction

Fractional calculus (FC) is a generalization of classical calculus that studies integrals and derivatives of non-integer order. Many definitions of fractional operators have been introduced by scientists such as Leibniz, Liouville, Riemann, Grünwald, and Caputo. More details on FC theories and applications can be found in [1,2,3,4,5,6]. Recently, many studies have been published that use fractional derivatives to model and analyze new problems in various branches of science, such as mathematical physics and engineering [7,8,9].
There are different methods for solving fractional differential equations. One of them is the residual power series method (RPSM), an efficient analytical technique for treating different types of fractional differential equations and systems. It is based on the generalized Taylor series formula and expresses the solution as a convergent power series [10,11,12,13,14]. Researchers have also developed many new methods based on the RPSM: some combine the Laplace transform with the RPSM to obtain the Laplace residual power series method, while others pair other transforms with the power series idea, as in [15,16,17,18,19,20]. Recently, new transformations have been used to solve fractional differential equations, as in [20,21,22,23,24,25,26,27,28]. All these methods provide analytical series solutions to initial value problems, but their techniques differ [29,30,31].
In this article, we introduce the direct power series method (DPSM), a new technique that requires neither integral transformations nor differentiation. It solves linear and nonlinear ordinary and fractional differential equations and saves researchers effort and time in determining the coefficients of the analytical series expansions obtained in their work. It provides reliable and efficient solutions compared to other methods. The method relies on simple substitutions in the target equation or system of differential equations that directly yield the coefficients of the power series expansion of the unknown functions. The same substitutions also apply to time-fractional differential equations and systems that can be solved by power series methods. In addition, the method opens a new avenue for further research.
In the literature, there are many analytical and approximate methods for solving fractional differential equations, such as [32,33,34]. All of these techniques require long computations. With DPSM, no such complicated operations (differentiation, discretization, and so on) are required, and a general term of the solution can be found directly through simple replacements. The novelty of this work lies in the speed and simplicity with which the proposed method produces analytical series solutions that converge rapidly to the exact ones. The strength of DPSM is that it can be programmed and can provide many terms of the series solution in comparison with other numerical methods. DPSM can thus help researchers study the series solutions of new equations and systems with less time and effort than computing approximate solutions by other methods.
In addition, many illustrative examples are solved by DPSM. These examples have previously been solved by other numerical methods in the literature, and we obtain the same results faster. Some of those methods are difficult to apply or take a long time to reach the solution; with DPSM, the steps that identify the solution are shown in full, and the software calculations are even easier than those of other power series methods.
This article is structured as follows: The second section illustrates preliminary information about FC. The main theorem of this research, and the steps of DPSM, are presented in section three. Finally, numerical examples and figures are presented in the fourth section.

2. Preliminaries

In this section, important definitions and properties of FC are presented. In addition, definitions and theorems of the fractional power series are introduced, which can be found in [10,11,12,13,14,15].
Definition 1. 
The Caputo fractional derivative operator of order $\alpha$ of the function $y(t)$ is defined by
$$D^{\alpha}[y(t)] = \begin{cases} \dfrac{1}{\Gamma(m-\alpha)}\displaystyle\int_{0}^{t}\dfrac{y^{(m)}(\xi)}{(t-\xi)^{\alpha+1-m}}\,d\xi, & m-1<\alpha<m,\ m\in\mathbb{N},\\[1ex] \dfrac{d^{m}}{dt^{m}}y(t), & \alpha=m. \end{cases}$$
Lemma 1. 
The following properties of the Caputo derivative are needed in the sequel:
i. $D^{\alpha}c = 0$ ($c$ is an arbitrary constant), $m-1<\alpha\le m$, $m\in\mathbb{N}$.
ii. $D^{\alpha}t^{\beta} = \dfrac{\Gamma(\beta+1)}{\Gamma(\beta+1-\alpha)}\,t^{\beta-\alpha}$, $\quad m-1<\alpha<m$, $\beta>m-1$, $m\in\mathbb{N}$, $\beta\in\mathbb{R}$.
iii. $D^{k\alpha}t^{n\alpha} = \begin{cases} \dfrac{\Gamma(n\alpha+1)}{\Gamma((n-k)\alpha+1)}\,t^{(n-k)\alpha}, & n\ge k,\\ 0, & n<k, \end{cases}$ $\quad m-1<\alpha<m$, $k,m,n\in\mathbb{N}$.
iv. $D^{k\alpha}\!\left(\displaystyle\sum_{n=0}^{\infty} a_{n}\dfrac{(\gamma t)^{n\alpha}}{\Gamma(n\alpha+1)}\right) = \displaystyle\sum_{n=0}^{\infty}\gamma^{(n+k)\alpha}a_{n+k}\dfrac{t^{n\alpha}}{\Gamma(n\alpha+1)}$, $\quad m-1<\alpha<m$, $m,k,n\in\mathbb{N}$.
The following two theorems introduce a fractional power series representation and some related convergence analysis.
Theorem 1 [11]. 
Suppose that $y(t)$ has a fractional power series representation at $t_{0}=0$ of the form
$$y(t) = \sum_{n=0}^{\infty} a_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)},$$
where $m-1<\alpha\le m$, $0\le t<R$, $m\in\mathbb{N}$, and $R$ is the radius of convergence. If $D^{n\alpha}y(t)$ is continuous on $(0,R)$, then the coefficients $a_{n}$, $n=0,1,\ldots$, are given by
$$a_{n} = D_{t}^{n\alpha}y(0).$$
Theorem 2 [11]. 
There are three cases for the convergence of the fractional power series $\sum_{n=0}^{\infty} a_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}$, as below:
  • The series converges only for $t=0$; the radius of convergence equals zero.
  • The series converges for all $t\ge 0$; the radius of convergence equals $\infty$.
  • The series converges for $0\le t\le R$ and diverges for $t>R$, where $R$ is a positive real number; herein, $R$ is the radius of convergence of the fractional power series.

3. Algorithm of DPSM

Power series methods are approximate techniques used to find solutions to difficult problems such as nonlinear differential equations. The main idea of the power series approach is to express the solution as an infinite series and to determine the series coefficients. Hence, to obtain efficient approximate solutions, we should find many terms (coefficients) of the series expansion.
In this section, we present a new technique, DPSM, which can be used to solve linear and nonlinear differential equations and systems of ordinary or fractional order. We present the main theorem of this new approach, which states the replacements that should be made when using DPSM.

3.1. Main Theorem in DPSM

The following theorem is the main theorem of DPSM; it introduces the replacements that are applied to the target equations when using the new method.
Theorem 3. 
Suppose that $y(t)$ and $f(t)$ have the fractional power series representations
$$y(t) = \sum_{n=0}^{\infty} a_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)} \quad\text{and}\quad f(t) = \sum_{m=0}^{\infty} b_{m}\frac{t^{m\alpha}}{\Gamma(m\alpha+1)},$$
where $y(t)$ and $f(t)$ are analytic functions and $a_{n}$, $b_{m}$ are the coefficients of $\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}$ and $\frac{t^{m\alpha}}{\Gamma(m\alpha+1)}$ for $y(t)$ and $f(t)$, respectively, with $n=0,1,\ldots$ and $m=0,1,2,\ldots$. Then we can determine:
(i) $a_{n+k}$ is the coefficient of $\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}$ in the series expansion of $D^{k\alpha}y(t)$ for any $k=0,1,\ldots$.
(ii) $\gamma^{(n+k)\alpha}a_{n+k}$ is the coefficient of $\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}$ in the series expansion of $D^{k\alpha}y(\gamma t)$ for any $k=0,1,\ldots$ and $\gamma\in\mathbb{R}$.
(iii) $\displaystyle\sum_{i=0}^{n}\frac{a_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}$ is the coefficient of $\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}$ in the series expansion of $y(t)f(t)$.
(iv) $\displaystyle\sum_{i=0}^{n}\frac{\beta^{i\alpha}\gamma^{(n-i)\alpha}a_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}$ is the coefficient of $\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}$ in the series expansion of $y(\beta t)f(\gamma t)$, where $\beta,\gamma\in\mathbb{R}$.
(v) $\displaystyle\sum_{i=0}^{n}\frac{\beta^{(i+k)\alpha}\gamma^{(n-i+m)\alpha}a_{i+k}b_{n-i+m}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}$ is the coefficient of $\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}$ in the series expansion of $D^{k\alpha}y(\beta t)\,D^{m\alpha}f(\gamma t)$, where $\beta,\gamma\in\mathbb{R}$ and $k,m=0,1,\ldots$.
Proof of (i). 
Substituting the series expansion of $y(t)$ into $D^{k\alpha}y(t)$, we obtain
$$D^{k\alpha}y(t) = D^{k\alpha}\!\left(\sum_{n=0}^{\infty} a_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}\right).$$
Using part (iv) of Lemma 1, we obtain
$$D^{k\alpha}y(t) = \sum_{n=k}^{\infty} a_{n}\frac{t^{(n-k)\alpha}}{\Gamma((n-k)\alpha+1)}.$$
Thus, the series expansion (4) can be written as
$$D^{k\alpha}y(t) = \sum_{n=0}^{\infty} a_{n+k}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}. \qquad\square$$
Proof of (ii). 
Substituting the series expansion of $y(\gamma t)$ into $D^{k\alpha}y(\gamma t)$, we obtain
$$D^{k\alpha}y(\gamma t) = D^{k\alpha}\!\left(\sum_{n=0}^{\infty} a_{n}\frac{(\gamma t)^{n\alpha}}{\Gamma(n\alpha+1)}\right).$$
Using part (iv) of Lemma 1, we obtain
$$D^{k\alpha}y(\gamma t) = \sum_{n=k}^{\infty} \gamma^{n\alpha}a_{n}\frac{t^{(n-k)\alpha}}{\Gamma((n-k)\alpha+1)}.$$
Thus, Equation (5) can be written as
$$D^{k\alpha}y(\gamma t) = \sum_{n=0}^{\infty} \gamma^{(n+k)\alpha}a_{n+k}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}. \qquad\square$$
Proof of (iii). 
Multiplying the series expansions of $y(t)$ and $f(t)$, we obtain
$$y(t)f(t) = \left(\sum_{i=0}^{\infty} a_{i}\frac{t^{i\alpha}}{\Gamma(i\alpha+1)}\right)\left(\sum_{j=0}^{\infty} b_{j}\frac{t^{j\alpha}}{\Gamma(j\alpha+1)}\right).$$
Equation (6) can be simplified as
$$y(t)f(t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\frac{a_{i}b_{j}\,t^{(i+j)\alpha}}{\Gamma(i\alpha+1)\Gamma(j\alpha+1)},$$
which can be rewritten as
$$y(t)f(t) = \sum_{n=0}^{\infty}\left(\sum_{i=0}^{n}\frac{a_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}\right)\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}. \qquad\square$$
Proof of (iv). 
Multiplying the series expansions of $y(\beta t)$ and $f(\gamma t)$, we obtain
$$y(\beta t)f(\gamma t) = \left(\sum_{i=0}^{\infty} a_{i}\frac{(\beta t)^{i\alpha}}{\Gamma(i\alpha+1)}\right)\left(\sum_{j=0}^{\infty} b_{j}\frac{(\gamma t)^{j\alpha}}{\Gamma(j\alpha+1)}\right).$$
Equation (7) can be written as
$$y(\beta t)f(\gamma t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\frac{a_{i}b_{j}\,\beta^{i\alpha}\gamma^{j\alpha}\,t^{(i+j)\alpha}}{\Gamma(i\alpha+1)\Gamma(j\alpha+1)},$$
which can be simplified as
$$y(\beta t)f(\gamma t) = \sum_{n=0}^{\infty}\left(\sum_{i=0}^{n}\frac{\beta^{i\alpha}\gamma^{(n-i)\alpha}a_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}\right)\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}. \qquad\square$$
Proof of (v). 
Substituting the series expansions of $y(\beta t)$ and $f(\gamma t)$ into $D^{k\alpha}y(\beta t)\,D^{m\alpha}f(\gamma t)$, we obtain
$$D^{k\alpha}y(\beta t)\,D^{m\alpha}f(\gamma t) = D^{k\alpha}\!\left(\sum_{i=0}^{\infty} a_{i}\frac{(\beta t)^{i\alpha}}{\Gamma(i\alpha+1)}\right)D^{m\alpha}\!\left(\sum_{j=0}^{\infty} b_{j}\frac{(\gamma t)^{j\alpha}}{\Gamma(j\alpha+1)}\right).$$
Using part (iv) of Lemma 1, Equation (9) can be written as
$$D^{k\alpha}y(\beta t)\,D^{m\alpha}f(\gamma t) = \left(\sum_{i=0}^{\infty}\beta^{(i+k)\alpha}a_{i+k}\frac{t^{i\alpha}}{\Gamma(i\alpha+1)}\right)\left(\sum_{j=0}^{\infty}\gamma^{(j+m)\alpha}b_{j+m}\frac{t^{j\alpha}}{\Gamma(j\alpha+1)}\right),$$
which can be simplified as
$$D^{k\alpha}y(\beta t)\,D^{m\alpha}f(\gamma t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\frac{\beta^{(i+k)\alpha}\gamma^{(j+m)\alpha}a_{i+k}b_{j+m}\,t^{(i+j)\alpha}}{\Gamma(i\alpha+1)\Gamma(j\alpha+1)}.$$
Equation (10) can be written as
$$D^{k\alpha}y(\beta t)\,D^{m\alpha}f(\gamma t) = \sum_{n=0}^{\infty}\left(\sum_{i=0}^{n}\frac{\beta^{(i+k)\alpha}\gamma^{(n-i+m)\alpha}a_{i+k}b_{n-i+m}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}\right)\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}.$$
The proof is complete.
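Before turning to the algorithm, the following short script is a sanity check that we add here (it is not part of the original derivation) to illustrate part (iii) of Theorem 3 in the classical case $\alpha=1$: with $y(t)=f(t)=e^{t}$, all $a_{n}=b_{n}=1$, and the formula must reproduce the coefficients $2^{n}$ of $e^{2t}$.

```python
# Hypothetical sanity check of Theorem 3(iii) for alpha = 1 (not from the paper):
# with y(t) = f(t) = e^t we have a_n = b_n = 1, and the product e^t * e^t = e^{2t}
# has n-th coefficient (of t^n / n!) equal to 2^n.  Part (iii) predicts
# sum_{i=0}^{n} a_i b_{n-i} Gamma(n a + 1) / (Gamma(i a + 1) Gamma((n-i) a + 1)),
# which for alpha = 1 reduces to the binomial sum, i.e. 2^n.
from math import gamma

alpha = 1.0
a = b = [1.0] * 11        # coefficients of e^t written as sum a_n t^(n a)/Gamma(n a + 1)

for n in range(11):
    coeff = sum(
        a[i] * b[n - i] * gamma(n * alpha + 1)
        / (gamma(i * alpha + 1) * gamma((n - i) * alpha + 1))
        for i in range(n + 1)
    )
    assert abs(coeff - 2 ** n) < 1e-9, (n, coeff)
print("Theorem 3(iii) reproduces the coefficients of e^{2t} for alpha = 1.")
```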

3.2. Algorithm of DPSM

To illustrate the new method, we consider the fractional differential equation of the following form:
$$L[y(t)] + N[y(t)] = 0,$$
where L and N denote linear and nonlinear differential operators, respectively.
The methodology of DPSM in solving initial value problems depends on expressing the solution of Equation (11) in the series form:
$$y(t) = \sum_{n=0}^{\infty} a_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}.$$
Now, we must find the coefficients $a_{n}$ of the series expansion in Equation (12). To apply DPSM, we consider some special cases of the linear and nonlinear operators in (11) to handle some types of differential equations. Our goal is to identify general terms of the coefficients $a_{n}$ based on the target equation, noting that Equation (11) may also be a system of differential equations. The main idea of DPSM is to replace each term in the given differential equation by a suitable expression from Theorem 3, then simplify the obtained series expansion and solve the resulting algebraic equation for the coefficients $a_{n}$. These replacements can be applied to single differential equations or to systems of ordinary or fractional order. This yields a recurrence relation in which we substitute $n=0,1,\ldots$ to obtain the coefficients $a_{0}, a_{1},\ldots$ of the series solution.
Now, to simplify the procedure of the method, we present the steps of DPSM in the below algorithm, as follows:
Step 1. Apply the replacements in Theorem 3, depending on the form of the target equation, and replace each term of the equation with its corresponding coefficient of $\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}$.
Step 2. Obtain a general form of the solution directly by moving the coefficient with the highest index, $a_{n+k}$, to the left-hand side and the others to the right-hand side:
$$a_{n+k} = f\!\left(a_{n+k-1},\ a_{n+k-2},\ \ldots,\ a_{0}\right).$$
Step 3. Substitute $n=0,1,2,\ldots$ recursively and compute as many coefficients of the approximate solution as are needed for a good approximation; a short sketch of this step is given below.
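As a rough illustration, the following minimal sketch (written in Python for concreteness, although the computations in this paper were carried out in Mathematica; the helper names dpsm_coefficients and dpsm_partial_sum are ours, not the paper's) shows how Step 3 can be automated once Steps 1 and 2 have produced a recurrence.

```python
# Minimal sketch of Step 3: generate coefficients from a user-supplied recurrence
# a_{n+1} = recurrence(n, a) and evaluate the truncated fractional power series.
from math import gamma

def dpsm_coefficients(a0, recurrence, terms):
    """Build a_0, ..., a_{terms-1} from a_{n+1} = recurrence(n, a)."""
    a = [a0]
    for n in range(terms - 1):
        a.append(recurrence(n, a))
    return a

def dpsm_partial_sum(a, t, alpha):
    """Evaluate the truncated series sum_n a_n t^(n*alpha) / Gamma(n*alpha + 1)."""
    return sum(an * t ** (n * alpha) / gamma(n * alpha + 1) for n, an in enumerate(a))

# Trivial illustration: the recurrence a_{n+1} = a_n with a_0 = 1 reproduces the
# Mittag-Leffler series E_alpha(t^alpha); for alpha = 1 this is e^t.
coeffs = dpsm_coefficients(1.0, lambda n, a: a[n], 20)
print(dpsm_partial_sum(coeffs, 1.0, 1.0))   # about 2.718281828
```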

4. Numerical Examples

In this section, we illustrate the steps of the new method by presenting examples and solving them with DPSM. We also present figures and tables to illustrate the efficiency, reliability, and simplicity of the new method.
Example 1. 
Consider the linear fractional differential equation
$$D^{\alpha}y(t) = y(t) + 1$$
with the initial condition (IC)
$$y(0) = 0.$$
Solution. 
To solve this equation by DPSM, we need the fractional power series representation of
$$f(t) = 1 = \sum_{n=0}^{\infty} b_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)},$$
where $b_{0}=1$ and $b_{1}=b_{2}=\cdots=0$.
In addition, we assume that
$$y(t) = \sum_{n=0}^{\infty} a_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}.$$
From the IC, we obtain $a_{0}=0$.
Now, applying the replacements from Theorem 3 to Equation (13), one can obtain
$$D^{\alpha}y(t) \rightarrow a_{n+1}, \qquad y(t) \rightarrow a_{n},$$
and
$$1 \rightarrow b_{n},$$
to obtain the recurrence relation
$$a_{n+1} = a_{n} + b_{n}, \qquad n = 0, 1, \ldots.$$
Now, for $n=0$, we obtain
$$a_{1} = a_{0} + b_{0} = 1.$$
For $n=1$, we obtain
$$a_{2} = a_{1} + b_{1} = 1.$$
For $n=2$, we obtain
$$a_{3} = a_{2} + b_{2} = 1.$$
The pattern of the coefficients $a_{n}$ is obvious, and the solution to Equation (13) with the IC (14) is given by
$$y(t) = \sum_{n=1}^{\infty}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)} = E_{\alpha}(t^{\alpha}) - 1,$$
where $E_{\alpha}(t^{\alpha})$ is the Mittag-Leffler function.
Substituting $\alpha=1$ into Equation (13), we obtain the ODE
$$y'(t) = y(t) + 1,$$
which has the exact solution
$$y(t) = e^{t} - 1,$$
identical to our result; indeed, placing $\alpha=1$ into Equation (17) gives the same result:
$$\sum_{n=1}^{\infty}\frac{t^{n}}{\Gamma(n+1)} = e^{t} - 1.$$
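The following short script is our own check (not part of the original example): it implements the recurrence $a_{n+1}=a_{n}+b_{n}$ and confirms that, for $\alpha=1$, the truncated series reproduces $e^{t}-1$.

```python
# Our own check of Example 1: DPSM recurrence a_{n+1} = a_n + b_n with a_0 = 0,
# b_0 = 1 and b_n = 0 for n >= 1; for alpha = 1 the series is e^t - 1.
from math import gamma, exp

def example1_partial_sum(t, alpha, terms=20):
    a = [0.0]                                    # a_0 = y(0) = 0
    b = [1.0] + [0.0] * (terms - 1)              # series coefficients of f(t) = 1
    for n in range(terms - 1):
        a.append(a[n] + b[n])                    # recurrence a_{n+1} = a_n + b_n
    return sum(an * t ** (n * alpha) / gamma(n * alpha + 1) for n, an in enumerate(a))

for t in (0.25, 0.5, 1.0):
    print(t, example1_partial_sum(t, 1.0), exp(t) - 1)   # the two columns agree
```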
Example 2. 
Consider the fractional pantograph equation of the form
$$D^{\alpha}y(t) = a\,y(t) + \sum_{i=1}^{L} b_{i}\,y(p_{i}t) + \sum_{i=1}^{m} r_{i}\,D^{\alpha}y(q_{i}t) + g(t),$$
where $\alpha\in(0,1]$, $p_{i}, q_{i}\in(0,1)$, and $g(t)$ and $y(t)$ are analytic functions.
Solution. 
Applying the replacements of Theorem 3 to Equation (18), we obtain
$$D^{\alpha}y(t) \rightarrow C_{n+1}, \qquad y(t) \rightarrow C_{n}, \qquad y(p_{i}t) \rightarrow C_{n}\,p_{i}^{n\alpha}, \qquad D^{\alpha}y(q_{i}t) \rightarrow C_{n+1}\,q_{i}^{(n+1)\alpha},$$
and we assume that
$$g(t) = \sum_{n=0}^{\infty} D^{n\alpha}g(0)\,\frac{t^{n\alpha}}{\Gamma(n\alpha+1)}.$$
Hence, by replacing $g(t)$ with $D^{n\alpha}g(0)$ in Equation (18), we obtain the recurrence relation by using Theorem 3:
$$C_{n+1} = a\,C_{n} + \sum_{i=1}^{L} b_{i}\,C_{n}\,p_{i}^{n\alpha} + \sum_{i=1}^{m} r_{i}\,C_{n+1}\,q_{i}^{(n+1)\alpha} + D^{n\alpha}g(0).$$
Hence,
$$C_{n+1} = \frac{a\,C_{n} + \sum_{i=1}^{L} b_{i}\,C_{n}\,p_{i}^{n\alpha} + \left(D_{t}^{n\alpha}g\right)(0)}{1 - \sum_{i=1}^{m} r_{i}\,q_{i}^{(n+1)\alpha}}, \qquad n \ge 0.$$
The general solution can be written as
$$y(t) = C_{0} + C_{1}\frac{t^{\alpha}}{\Gamma(\alpha+1)} + C_{2}\frac{t^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots,$$
which is the same result obtained in [24] using the Laplace residual power series method.
Herein, we see the simplicity of using DPSM to solve fractional differential equations.
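To illustrate how the recurrence can be evaluated in practice, the following sketch uses parameter values chosen here purely for illustration (they are not taken from the paper): $\alpha=1$, $a=1$, a single proportional delay with $b_{1}=-1$, $p_{1}=\tfrac{1}{2}$, no neutral terms, and $g(t)=1-\tfrac{t}{2}$, for which the exact solution is $y(t)=1+t$.

```python
# Illustrative evaluation of the Example 2 recurrence with parameters we chose
# ourselves (alpha = 1, a = 1, b_1 = -1, p_1 = 1/2, no neutral terms, g = 1 - t/2);
# the exact solution of this particular instance is y(t) = 1 + t.
alpha, a = 1.0, 1.0
b, p = [-1.0], [0.5]                 # proportional-delay coefficients b_i, p_i
r, q = [], []                        # neutral terms absent in this illustration
g = [1.0, -0.5] + [0.0] * 10         # D^{n alpha} g(0) for g(t) = 1 - t/2

C = [1.0]                            # C_0 = y(0) = 1
for n in range(10):
    num = a * C[n] + sum(bi * C[n] * pi ** (n * alpha) for bi, pi in zip(b, p)) + g[n]
    den = 1.0 - sum(ri * qi ** ((n + 1) * alpha) for ri, qi in zip(r, q))
    C.append(num / den)

print(C[:5])                         # [1.0, 1.0, 0.0, 0.0, 0.0]  ->  y(t) = 1 + t
```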
Example 3. 
Consider the following non-linear pantograph equation
$$D_{t}^{\alpha}y(t) = \frac{1}{2}\,y(t) + \frac{1}{2^{1-\alpha}}\,y\!\left(\frac{t}{2}\right)D_{t}^{\alpha}y\!\left(\frac{t}{2}\right), \qquad t\ge 0, \quad 0<\alpha\le 1,$$
subject to the IC
$$y(0) = 1.$$
Solution. 
To solve Equation (21) by DPSM, apply the replacements from Theorem 3 to Equation (21), namely
$$D^{\alpha}y(t) \rightarrow a_{n+1}, \qquad y(t) \rightarrow a_{n},$$
and
$$y\!\left(\tfrac{t}{2}\right)D_{t}^{\alpha}y\!\left(\tfrac{t}{2}\right) \rightarrow \sum_{i=0}^{n}\left(\tfrac{1}{2}\right)^{i\alpha}\left(\tfrac{1}{2}\right)^{(n-i+1)\alpha}\frac{a_{i}a_{n-i+1}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)},$$
to obtain
$$a_{n+1} = \frac{1}{2}a_{n} + \frac{1}{2^{1-\alpha}}\sum_{i=0}^{n}\left(\tfrac{1}{2}\right)^{i\alpha}\left(\tfrac{1}{2}\right)^{(n-i+1)\alpha}\frac{a_{i}a_{n-i+1}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)} = \frac{1}{2^{n\alpha+1}-1}\left(2^{n\alpha}a_{n} + \sum_{i=1}^{n}\frac{a_{i}a_{n-i+1}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}\right).$$
Now, we have:
$$n=0:\quad a_{1} = 1.$$
$$n=1:\quad a_{2} = \frac{1}{2^{\alpha+1}-1}\left(2^{\alpha}a_{1} + \sum_{i=1}^{1}\frac{a_{i}a_{2-i}\,\Gamma(\alpha+1)}{\Gamma(i\alpha+1)\Gamma((1-i)\alpha+1)}\right) = \frac{2^{\alpha}+1}{2^{\alpha+1}-1}.$$
$$n=2:\quad a_{3} = \frac{1}{2^{2\alpha+1}-1}\left(2^{2\alpha}a_{2} + a_{1}a_{2}\frac{\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)} + a_{2}a_{1}\right) = \frac{a_{2}}{2^{2\alpha+1}-1}\left(2^{2\alpha} + \frac{\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)} + 1\right).$$
$$n=3:\quad a_{4} = \frac{1}{2^{3\alpha+1}-1}\left(2^{3\alpha}a_{3} + \left(a_{1}a_{3}+a_{2}^{2}\right)\frac{\Gamma(3\alpha+1)}{\Gamma(\alpha+1)\Gamma(2\alpha+1)} + a_{1}a_{3}\right) = \frac{a_{3}}{2^{3\alpha+1}-1}\left(2^{3\alpha} + \frac{\Gamma(3\alpha+1)}{\Gamma(\alpha+1)\Gamma(2\alpha+1)} + 1\right) + \frac{a_{2}^{2}}{2^{3\alpha+1}-1}\,\frac{\Gamma(3\alpha+1)}{\Gamma(\alpha+1)\Gamma(2\alpha+1)}.$$
$$n=4:\quad a_{5} = \frac{a_{4}}{2^{4\alpha+1}-1}\left(2^{4\alpha} + 1 + \frac{\Gamma(4\alpha+1)}{\Gamma(\alpha+1)\Gamma(3\alpha+1)}\right) + \frac{a_{2}a_{3}}{2^{4\alpha+1}-1}\left(\frac{\Gamma(4\alpha+1)}{\Gamma(\alpha+1)\Gamma(3\alpha+1)} + \frac{\Gamma(4\alpha+1)}{\Gamma^{2}(2\alpha+1)}\right).$$
The solution of Equation (21) can be expressed as
$$y(t) = a_{0} + a_{1}\frac{t^{\alpha}}{\Gamma(\alpha+1)} + a_{2}\frac{t^{2\alpha}}{\Gamma(2\alpha+1)} + \cdots.$$
The exact solution to the ordinary form of Equation (21), obtained by placing $\alpha=1$ into (21), is $y(t)=e^{t}$, which is identical to our result.
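A small script (ours, with the function name chosen here) that evaluates the recurrence above: for $\alpha=1$ every coefficient should equal 1, in agreement with $y(t)=e^{t}$, while for fractional $\alpha$ it returns the coefficients $a_{0},a_{1},\ldots$ directly.

```python
# Our own evaluation of the Example 3 recurrence
# a_{n+1} = (2^{n a} a_n + sum_{i=1}^{n} a_i a_{n-i+1} G(n a+1)/(G(i a+1) G((n-i) a+1)))
#           / (2^{n a + 1} - 1).
from math import gamma

def example3_coefficients(alpha, terms):
    a = [1.0]                        # a_0 = y(0) = 1
    for n in range(terms - 1):
        s = sum(
            a[i] * a[n - i + 1] * gamma(n * alpha + 1)
            / (gamma(i * alpha + 1) * gamma((n - i) * alpha + 1))
            for i in range(1, n + 1)
        )
        a.append((2 ** (n * alpha) * a[n] + s) / (2 ** (n * alpha + 1) - 1))
    return a

print(example3_coefficients(1.0, 8))    # all ones -> y(t) = e^t when alpha = 1
print(example3_coefficients(0.8, 5))    # fractional-order coefficients a_0 .. a_4
```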
Example 4 [25]. 
Consider the following system
$$D^{\alpha}y_{1}(t) = y_{1}(t), \qquad D^{\alpha}y_{2}(t) = 2\left(y_{1}(t)\right)^{2}, \qquad D^{\alpha}y_{3}(t) = 3\,y_{1}(t)\,y_{2}(t),$$
subject to the ICs $y_{1}(0)=1$, $y_{2}(0)=1$, and $y_{3}(0)=0$, where the exact solution to system (23), with $\alpha=1$, is $y_{1}=e^{t}$, $y_{2}=e^{2t}$, and $y_{3}=e^{3t}-1$.
Solution. 
The solution by DPSM is obtained by applying the replacements of Theorem 3, which yield the general solution directly:
$$D^{\alpha}y_{1}(t) \rightarrow a_{n+1}, \qquad D^{\alpha}y_{2}(t) \rightarrow b_{n+1}, \qquad D^{\alpha}y_{3}(t) \rightarrow c_{n+1}, \qquad y_{1}(t) \rightarrow a_{n},$$
$$y_{1}^{2}(t) \rightarrow \sum_{i=0}^{n}\frac{a_{i}a_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)},$$
and
$$y_{1}(t)\,y_{2}(t) \rightarrow \sum_{i=0}^{n}\frac{a_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}.$$
Then, we obtain the following recurrence relations for $n=0,1,2,\ldots$:
$$a_{n+1} = a_{n}, \qquad b_{n+1} = 2\sum_{i=0}^{n}\frac{a_{i}a_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}, \qquad c_{n+1} = 3\sum_{i=0}^{n}\frac{a_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}.$$
From $a_{n+1}=a_{n}$, we obtain $a_{0}=a_{1}=a_{2}=\cdots=1$, and so
$$y_{1}(t) = \sum_{n=0}^{\infty}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)} = E_{\alpha}(t^{\alpha}),$$
where $E_{\alpha}(t^{\alpha})$ is the Mittag-Leffler function.
From this,
$$b_{n+1} = 2\sum_{i=0}^{n}\frac{a_{i}a_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)} = 2\sum_{i=0}^{n}\frac{\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}.$$
Now, we have:
$$n=0:\ b_{1}=2. \qquad n=1:\ b_{2}=2(2)=4. \qquad n=2:\ b_{3}=2\left(2+\frac{\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}\right). \qquad n=3:\ b_{4}=2\left(2+\frac{2\,\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)}\right).$$
$$n=4:\ b_{5}=2\left(2+\frac{2\,\Gamma(4\alpha+1)}{\Gamma(3\alpha+1)\Gamma(\alpha+1)}+\frac{\Gamma(4\alpha+1)}{\Gamma^{2}(2\alpha+1)}\right). \qquad n=5:\ b_{6}=2\left(2+\frac{2\,\Gamma(5\alpha+1)}{\Gamma(4\alpha+1)\Gamma(\alpha+1)}+\frac{2\,\Gamma(5\alpha+1)}{\Gamma(3\alpha+1)\Gamma(2\alpha+1)}\right).$$
Hence,
$$y_{2}(t) = \sum_{n=0}^{\infty} b_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)} = 1 + \frac{2\,t^{\alpha}}{\Gamma(\alpha+1)} + \frac{4\,t^{2\alpha}}{\Gamma(2\alpha+1)} + 2\left(2+\frac{\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}\right)\frac{t^{3\alpha}}{\Gamma(3\alpha+1)} + 2\left(2+\frac{2\,\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)}\right)\frac{t^{4\alpha}}{\Gamma(4\alpha+1)} + 2\left(2+\frac{2\,\Gamma(4\alpha+1)}{\Gamma(3\alpha+1)\Gamma(\alpha+1)}+\frac{\Gamma(4\alpha+1)}{\Gamma^{2}(2\alpha+1)}\right)\frac{t^{5\alpha}}{\Gamma(5\alpha+1)} + 2\left(2+\frac{2\,\Gamma(5\alpha+1)}{\Gamma(4\alpha+1)\Gamma(\alpha+1)}+\frac{2\,\Gamma(5\alpha+1)}{\Gamma(3\alpha+1)\Gamma(2\alpha+1)}\right)\frac{t^{6\alpha}}{\Gamma(6\alpha+1)} + \cdots.$$
Finally, from the recurrence relation
$$c_{n+1} = 3\sum_{i=0}^{n}\frac{a_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)},$$
we have:
$$n=0:\ c_{1}=3. \qquad n=1:\ c_{2}=3(2+1)=9. \qquad n=2:\ c_{3}=3\left(1+4+\frac{2\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}\right)=3\left(5+\frac{2\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}\right).$$
$$n=3:\ c_{4}=15+\frac{6\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}+\frac{18\,\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)}.$$
$$n=4:\ c_{5}=15+\frac{12\,\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)}+\frac{18\,\Gamma(4\alpha+1)}{\Gamma(3\alpha+1)\Gamma(\alpha+1)}+\frac{6\,\Gamma(2\alpha+1)\Gamma(4\alpha+1)}{\Gamma^{3}(\alpha+1)\Gamma(3\alpha+1)}+\frac{12\,\Gamma(4\alpha+1)}{\Gamma^{2}(2\alpha+1)}.$$
$$n=5:\ c_{6}=15+\frac{12\,\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)}+\frac{6\,\Gamma(4\alpha+1)}{\Gamma^{2}(2\alpha+1)}+\frac{18\,\Gamma(5\alpha+1)}{\Gamma(4\alpha+1)\Gamma(\alpha+1)}+\frac{12\,\Gamma(3\alpha+1)\Gamma(5\alpha+1)}{\Gamma(4\alpha+1)\Gamma(2\alpha+1)\Gamma^{2}(\alpha+1)}+\frac{24\,\Gamma(5\alpha+1)}{\Gamma(3\alpha+1)\Gamma(2\alpha+1)}+\frac{6\,\Gamma(5\alpha+1)}{\Gamma(3\alpha+1)\Gamma^{2}(\alpha+1)}.$$
Hence,
$$y_{3}(t) = \sum_{n=0}^{\infty} c_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)} = \frac{3\,t^{\alpha}}{\Gamma(\alpha+1)} + \frac{9\,t^{2\alpha}}{\Gamma(2\alpha+1)} + \left(15+\frac{6\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}\right)\frac{t^{3\alpha}}{\Gamma(3\alpha+1)} + \left(15+\frac{6\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}+\frac{18\,\Gamma(3\alpha+1)}{\Gamma(2\alpha+1)\Gamma(\alpha+1)}\right)\frac{t^{4\alpha}}{\Gamma(4\alpha+1)} + c_{5}\,\frac{t^{5\alpha}}{\Gamma(5\alpha+1)} + c_{6}\,\frac{t^{6\alpha}}{\Gamma(6\alpha+1)} + \cdots,$$
with $c_{5}$ and $c_{6}$ as given above.
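The following sketch is our own re-computation (not the authors' Mathematica code): it generates the coefficients from the three recurrences and, for $\alpha=1$, compares the partial sums with the exact solutions $e^{t}$, $e^{2t}$, and $e^{3t}-1$.

```python
# Our own re-computation of Example 4: build a_n, b_n, c_n from the recurrences
# and compare the truncated series with e^t, e^{2t}, e^{3t} - 1 for alpha = 1.
from math import gamma, exp

def conv(x, y, n, alpha):
    """Coefficient of t^(n a)/Gamma(n a + 1) in the product (Theorem 3(iii))."""
    return sum(
        x[i] * y[n - i] * gamma(n * alpha + 1)
        / (gamma(i * alpha + 1) * gamma((n - i) * alpha + 1))
        for i in range(n + 1)
    )

def example4_coefficients(alpha, terms):
    a, b, c = [1.0], [1.0], [0.0]                    # initial conditions
    for n in range(terms - 1):
        a.append(a[n])                               # a_{n+1} = a_n
        b.append(2.0 * conv(a, a, n, alpha))         # b_{n+1} = 2 * coeff of y_1^2
        c.append(3.0 * conv(a, b, n, alpha))         # c_{n+1} = 3 * coeff of y_1 y_2
    return a, b, c

def partial_sum(coeffs, t, alpha):
    return sum(x * t ** (n * alpha) / gamma(n * alpha + 1) for n, x in enumerate(coeffs))

a, b, c = example4_coefficients(1.0, 15)
t = 0.5
print(partial_sum(a, t, 1.0), exp(t))                # e^t
print(partial_sum(b, t, 1.0), exp(2 * t))            # e^{2t}
print(partial_sum(c, t, 1.0), exp(3 * t) - 1)        # e^{3t} - 1
```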
We note that, for $y_{1}$, we obtain the exact value, while $y_{2}$ and $y_{3}$ are given above up to the sixth approximation; using the Mathematica 13 software program, the error of the twelfth approximation is reported in the following tables. The CPU time spent to produce Table 1 below was 0.0002855 s, for Table 2 it was 0.0002676 s, and for Table 3 it was 0.0002676 s.
Moreover, we note that the obtained numerical results are identical to those obtained by the Laplace residual power series method [12].
The following figures illustrate the exact solutions, approximate solutions, and absolute errors of $y_{1}$, $y_{2}$, and $y_{3}$. The CPU times spent to produce the two panels of Figure 1, Figure 2, and Figure 3 were, respectively, 0.032174 and 0.0156867 s, 0.1422075 and 0.158477 s, and 0.5517424 and 0.7638421 s.
Example 5 [26]. 
Consider the following nonlinear system of fractional differential equations
$$D^{\alpha}y_{1}(t) = -1002\,y_{1}(t) + 1000\,y_{2}^{2}(t) \qquad\text{and}\qquad D^{\alpha}y_{2}(t) = y_{1}(t) - y_{2}(t) - y_{2}^{2}(t),$$
with the ICs $y_{1}(0)=1$ and $y_{2}(0)=1$, respectively.
The exact solutions to system (24) with $\alpha=1$ are $y_{1}(t)=e^{-2t}$ and $y_{2}(t)=e^{-t}$.
Solution. 
To solve system (24) using DPSM, apply the replacements from Theorem 3 to Equation (24) to obtain
$$D^{\alpha}y_{1}(t) \rightarrow a_{n+1}, \qquad D^{\alpha}y_{2}(t) \rightarrow b_{n+1}, \qquad y_{1}(t) \rightarrow a_{n}, \qquad y_{2}(t) \rightarrow b_{n},$$
and
$$y_{2}^{2}(t) \rightarrow \sum_{i=0}^{n}\frac{b_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)},$$
where
$$a_{n+1} = -1002\,a_{n} + 1000\sum_{i=0}^{n}\frac{b_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)} \qquad\text{and}\qquad b_{n+1} = a_{n} - b_{n} - \sum_{i=0}^{n}\frac{b_{i}b_{n-i}\,\Gamma(n\alpha+1)}{\Gamma(i\alpha+1)\Gamma((n-i)\alpha+1)}.$$
Now, we have:
$$n=0:\quad a_{1} = -1002(1) + 1000(1) = -2, \qquad b_{1} = 1 - 1 - 1 = -1.$$
$$n=1:\quad a_{2} = -1002(-2) + 1000(-2) = 4, \qquad b_{2} = -2 - (-1) - (-2) = 1.$$
$$n=2:\quad a_{3} = -1002(4) + 1000\left(\frac{\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)} + 2\right) = -2008 + \frac{1000\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}, \qquad b_{3} = 4 - 1 - \left(\frac{\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)} + 2\right) = 1 - \frac{\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}.$$
$$n=3:\quad a_{4} = 2014016 - \frac{1004000\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)} - \frac{2000\,\Gamma(3\alpha+1)}{\Gamma(\alpha+1)\Gamma(2\alpha+1)}, \qquad b_{4} = -2011 + \frac{1003\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)} + \frac{2\,\Gamma(3\alpha+1)}{\Gamma(\alpha+1)\Gamma(2\alpha+1)}.$$
$$n=4:\quad a_{5} = -2022066032 + \frac{1008014000\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)} + \frac{2008000\,\Gamma(3\alpha+1)}{\Gamma(\alpha+1)\Gamma(2\alpha+1)} + \frac{1000\,\Gamma(4\alpha+1)}{\Gamma^{2}(2\alpha+1)} - \frac{2000\,\Gamma(4\alpha+1)}{\Gamma(\alpha+1)\Gamma(3\alpha+1)} + \frac{2000\,\Gamma(2\alpha+1)\Gamma(4\alpha+1)}{\Gamma^{3}(\alpha+1)\Gamma(3\alpha+1)},$$
$$\phantom{n=4:}\quad b_{5} = 2020049 - \frac{1007009\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)} - \frac{2006\,\Gamma(3\alpha+1)}{\Gamma(\alpha+1)\Gamma(2\alpha+1)} - \frac{\Gamma(4\alpha+1)}{\Gamma^{2}(2\alpha+1)} + \frac{2\,\Gamma(4\alpha+1)}{\Gamma(\alpha+1)\Gamma(3\alpha+1)} - \frac{2\,\Gamma(2\alpha+1)\Gamma(4\alpha+1)}{\Gamma^{3}(\alpha+1)\Gamma(3\alpha+1)}.$$
The approximate solutions to system (24) can be written as
$$y_{1}(t) = \sum_{n=0}^{\infty} a_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)} = 1 - \frac{2\,t^{\alpha}}{\Gamma(\alpha+1)} + \frac{4\,t^{2\alpha}}{\Gamma(2\alpha+1)} + \left(-2008 + \frac{1000\,\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}\right)\frac{t^{3\alpha}}{\Gamma(3\alpha+1)} + a_{4}\,\frac{t^{4\alpha}}{\Gamma(4\alpha+1)} + a_{5}\,\frac{t^{5\alpha}}{\Gamma(5\alpha+1)} + \cdots,$$
$$y_{2}(t) = \sum_{n=0}^{\infty} b_{n}\frac{t^{n\alpha}}{\Gamma(n\alpha+1)} = 1 - \frac{t^{\alpha}}{\Gamma(\alpha+1)} + \frac{t^{2\alpha}}{\Gamma(2\alpha+1)} + \left(1 - \frac{\Gamma(2\alpha+1)}{\Gamma^{2}(\alpha+1)}\right)\frac{t^{3\alpha}}{\Gamma(3\alpha+1)} + b_{4}\,\frac{t^{4\alpha}}{\Gamma(4\alpha+1)} + b_{5}\,\frac{t^{5\alpha}}{\Gamma(5\alpha+1)} + \cdots,$$
with $a_{4}$, $a_{5}$, $b_{4}$, and $b_{5}$ as given above.
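As a check, the following sketch (ours) evaluates the two recurrences; for $\alpha=1$ the partial sums should approach the exact solutions $e^{-2t}$ and $e^{-t}$.

```python
# Our own evaluation of the Example 5 recurrences; for alpha = 1 the truncated
# series approach e^{-2t} and e^{-t}.
from math import gamma, exp

def conv(x, y, n, alpha):
    return sum(
        x[i] * y[n - i] * gamma(n * alpha + 1)
        / (gamma(i * alpha + 1) * gamma((n - i) * alpha + 1))
        for i in range(n + 1)
    )

def example5_coefficients(alpha, terms):
    a, b = [1.0], [1.0]                              # y_1(0) = y_2(0) = 1
    for n in range(terms - 1):
        y2sq = conv(b, b, n, alpha)                  # coefficient of y_2^2
        a.append(-1002.0 * a[n] + 1000.0 * y2sq)
        b.append(a[n] - b[n] - y2sq)
    return a, b

def partial_sum(coeffs, t, alpha):
    return sum(x * t ** (n * alpha) / gamma(n * alpha + 1) for n, x in enumerate(coeffs))

a, b = example5_coefficients(1.0, 15)
t = 0.32
print(partial_sum(a, t, 1.0), exp(-2 * t))           # ~0.527292
print(partial_sum(b, t, 1.0), exp(-t))               # ~0.726149
```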
This problem was introduced in [26], where the authors computed only the second approximation; using DPSM, the steps are shown completely up to the fifth approximation, which is more accurate. The solutions calculated by the Mathematica program were computed to the twelfth approximation, and the errors are reported in the tables below.
Moreover, we note that the numerical results are identical to those obtained by the Laplace residual power series method [12]. The CPU time spent to produce Table 4 below was 0.0004346 s, and for Table 5 it was 0.005228 s.
The following figures illustrate the exact solutions, approximate solutions, and absolute errors of $y_{1}$ and $y_{2}$. The CPU times spent to produce Figure 4 and Figure 5 were, respectively, 0.1274689 and 0.049291 s, and 0.1531077 and 0.17544980 s.
Example 6 [27]. 
Consider the three-dimensional pantograph equations
$$y_{1}'(t) = 2\,y_{2}\!\left(\tfrac{t}{2}\right) + y_{3}(t) - t\cos\!\left(\tfrac{t}{2}\right), \qquad y_{2}'(t) = 1 - t\sin t - 2\,y_{3}^{2}\!\left(\tfrac{t}{2}\right), \qquad y_{3}'(t) = y_{2}(t) - y_{1}(t) - t\cos t,$$
with the ICs $y_{1}(0)=-1$, $y_{2}(0)=0$, and $y_{3}(0)=0$, respectively.
Solution. 
To solve system (26) by DPSM, first we write the Taylor series of the functions
$$f(t) = -t\cos\!\left(\tfrac{t}{2}\right) = -t + \frac{3}{4}\frac{t^{3}}{3!} - \frac{5}{16}\frac{t^{5}}{5!} + \cdots, \qquad g(t) = 1 - t\sin t = 1 - 2\frac{t^{2}}{2!} + 4\frac{t^{4}}{4!} - 6\frac{t^{6}}{6!} + \cdots, \qquad h(t) = -t\cos t = -t + 3\frac{t^{3}}{3!} - 5\frac{t^{5}}{5!} + \cdots.$$
Then,
$$f_{0}=f_{2}=f_{4}=f_{6}=f_{8}=0, \qquad f_{1}=-1,\ f_{3}=\tfrac{3}{4},\ f_{5}=-\tfrac{5}{16},\ f_{7}=\tfrac{7}{64},\ f_{9}=-\tfrac{9}{256},$$
$$g_{0}=1,\ g_{2}=-2,\ g_{4}=4,\ g_{6}=-6,\ g_{8}=8, \qquad g_{1}=g_{3}=g_{5}=g_{7}=g_{9}=0,$$
and
$$h_{0}=h_{2}=h_{4}=h_{6}=h_{8}=0, \qquad h_{1}=-1,\ h_{3}=3,\ h_{5}=-5,\ h_{7}=7,\ h_{9}=-9.$$
Using DPSM, the solution is obtained after applying the replacements of Theorem 3, as follows:
$$y_{1}'(t) \rightarrow a_{n+1}, \qquad y_{2}'(t) \rightarrow b_{n+1}, \qquad y_{3}'(t) \rightarrow c_{n+1}, \qquad y_{1}(t) \rightarrow a_{n}, \qquad y_{2}(t) \rightarrow b_{n}, \qquad y_{3}(t) \rightarrow c_{n}, \qquad y_{2}\!\left(\tfrac{t}{2}\right) \rightarrow \left(\tfrac{1}{2}\right)^{n} b_{n},$$
and
$$y_{3}^{2}\!\left(\tfrac{t}{2}\right) \rightarrow \left(\tfrac{1}{2}\right)^{n}\sum_{i=0}^{n}\frac{c_{i}c_{n-i}\,n!}{i!\,(n-i)!},$$
where
$$a_{n+1} = 2\left(\tfrac{1}{2}\right)^{n} b_{n} + c_{n} + f_{n}, \qquad b_{n+1} = g_{n} - 2\left(\tfrac{1}{2}\right)^{n}\sum_{i=0}^{n}\frac{c_{i}c_{n-i}\,n!}{i!\,(n-i)!}, \qquad c_{n+1} = b_{n} - a_{n} + h_{n}.$$
Now:
For $n=0$: $\ a_{1} = 0 + 0 + 0 = 0$, $\quad b_{1} = 1 - 0 = 1$, $\quad c_{1} = 0 - (-1) + 0 = 1$.
For $n=1$: $\ a_{2} = 2\left(\tfrac{1}{2}\right)(1) + 1 - 1 = 1$, $\quad b_{2} = 0 - 0 = 0$, $\quad c_{2} = 1 - 0 - 1 = 0$.
For $n=2$: $\ a_{3} = 0 + 0 + 0 = 0$, $\quad b_{3} = -2 - 2\left(\tfrac{1}{2}\right)^{2}\left(\tfrac{1\cdot 2!}{1!\,1!}\right) = -3$, $\quad c_{3} = 0 - 1 + 0 = -1$.
For $n=3$: $\ a_{4} = 2\left(\tfrac{1}{2}\right)^{3}(-3) + (-1) + \tfrac{3}{4} = -1$, $\quad b_{4} = 0 - 0 = 0$, $\quad c_{4} = -3 - 0 + 3 = 0$.
For $n=4$: $\ a_{5} = 0 + 0 + 0 = 0$, $\quad b_{5} = 4 - 2\left(\tfrac{1}{2}\right)^{4}\left(\tfrac{2(-1)\,4!}{1!\,3!}\right) = 5$, $\quad c_{5} = 0 - (-1) + 0 = 1$.
For $n=5$: $\ a_{6} = 2\left(\tfrac{1}{2}\right)^{5}(5) + 1 - \tfrac{5}{16} = 1$, $\quad b_{6} = 0 - 0 = 0$, $\quad c_{6} = 5 - 0 - 5 = 0$.
For $n=6$: $\ a_{7} = 0 + 0 + 0 = 0$, $\quad b_{7} = -6 - 2\left(\tfrac{1}{2}\right)^{6}\left(\tfrac{2(1)(1)}{1!\,5!} + \tfrac{(-1)^{2}}{3!\,3!}\right)6! = -7$, $\quad c_{7} = 0 - 1 + 0 = -1$.
For $n=7$: $\ a_{8} = 2\left(\tfrac{1}{2}\right)^{7}(-7) + (-1) + \tfrac{7}{64} = -1$, $\quad b_{8} = 0 - 2\left(\tfrac{1}{2}\right)^{7}(0) = 0$, $\quad c_{8} = -7 - 0 + 7 = 0$.
For $n=8$: $\ a_{9} = 2\left(\tfrac{1}{2}\right)^{8}(0) + 0 + 0 = 0$, $\quad b_{9} = 8 - 2\left(\tfrac{1}{2}\right)^{8}\left(\tfrac{2(1)(-1)\,8!}{1!\,7!} + \tfrac{2(-1)(1)\,8!}{3!\,5!}\right) = 9$, $\quad c_{9} = 0 - (-1) + 0 = 1$.
For $n=9$: $\ a_{10} = 2\left(\tfrac{1}{2}\right)^{9}(9) + 1 - \tfrac{9}{256} = 1$, $\quad b_{10} = 0 - 2\left(\tfrac{1}{2}\right)^{9}(0) = 0$, $\quad c_{10} = 9 - 0 - 9 = 0$.
Then,
$$y_{1,10}(t) = \sum_{n=0}^{10} a_{n}\frac{t^{n}}{n!} = -1 + \frac{t^{2}}{2!} - \frac{t^{4}}{4!} + \frac{t^{6}}{6!} - \frac{t^{8}}{8!} + \frac{t^{10}}{10!}, \qquad y_{2,10}(t) = \sum_{n=0}^{10} b_{n}\frac{t^{n}}{n!} = t - \frac{3t^{3}}{3!} + \frac{5t^{5}}{5!} - \frac{7t^{7}}{7!} + \frac{9t^{9}}{9!}, \qquad y_{3,10}(t) = \sum_{n=0}^{10} c_{n}\frac{t^{n}}{n!} = t - \frac{t^{3}}{3!} + \frac{t^{5}}{5!} - \frac{t^{7}}{7!} + \frac{t^{9}}{9!}.$$
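The following sketch is our own re-computation of this example: it automates the recurrences above and evaluates the tenth-order partial sums, which agree with $-\cos t$, $t\cos t$, and $\sin t$ at $t=1$.

```python
# Our own re-computation of Example 6: integer-order recurrences with the
# Taylor coefficients of f(t) = -t cos(t/2), g(t) = 1 - t sin t, h(t) = -t cos t.
from math import cos, sin, factorial

N = 10
f = [0.0] * (N + 1)
g = [0.0] * (N + 1)
h = [0.0] * (N + 1)
for k in range(0, N, 2):
    f[k + 1] = (-1) ** (k // 2 + 1) * (k + 1) / 4 ** (k // 2)   # f_{2j+1}
    h[k + 1] = (-1) ** (k // 2 + 1) * (k + 1)                    # h_{2j+1}
g[0] = 1.0
for k in range(2, N + 1, 2):
    g[k] = (-1) ** (k // 2) * k                                  # g_{2j}

a, b, c = [-1.0], [0.0], [0.0]        # y_1(0) = -1, y_2(0) = 0, y_3(0) = 0
for n in range(N):
    sq = sum(c[i] * c[n - i] * factorial(n) / (factorial(i) * factorial(n - i))
             for i in range(n + 1))
    a.append(2 * 0.5 ** n * b[n] + c[n] + f[n])
    b.append(g[n] - 2 * 0.5 ** n * sq)
    c.append(b[n] - a[n] + h[n])

t = 1.0
y1 = sum(a[n] * t ** n / factorial(n) for n in range(N + 1))
y2 = sum(b[n] * t ** n / factorial(n) for n in range(N + 1))
y3 = sum(c[n] * t ** n / factorial(n) for n in range(N + 1))
print(y1, -cos(t))        # ~ -0.540302
print(y2, t * cos(t))     # ~  0.540302
print(y3, sin(t))         # ~  0.841471
```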
In the following tables, Table 6, Table 7 and Table 8, we present the tenth approximate solutions of y 1 , y 2 and y 3 respectively by the DPSM.
In addition, the errors and relative errors are illustrated in the tables up to the tenth approximations. The CPU time spent to complete Table 6 was 0.0330754 s, and to complete Table 7, it was 0.0001189 s, and for Table 8, it was 0.0001083   s .
The following figures (Figure 6, Figure 7 and Figure 8) show graphs of the approximate solutions using DPSM, with different values of k (approximate levels), for y 1 ,   y 2 , and y 3 , respectively. The CPU time spent to complete Figure 6, Figure 7 and Figure 8, respectively, was 0.0351227, 0.0348473 ,   and   0.0352323 s.

5. Remarks and Discussion

In this section, we introduce remarks and discussion about the presented method.
  • The new technique, DPSM, depends on obtaining numerical series solutions to differential equations using the idea of the power series method.
  • The method is applicable and efficient in finding approximate solutions to differential equations of different types.
  • There is no need to compute limits or derivatives through the applications.
  • The method can be easily programmed by computer software.
  • DPSM saves time and effort for researchers in obtaining analytical solutions.
  • DPSM requires no linearization or discretization to apply, which is different from other analytical methods.
  • We can obtain many terms of the approximate solution using the general term that we obtain using DPSM for the target equation.
  • Unlike many other analytical methods, DPSM yields a general recurrence for the coefficients of the series solution, from which as many terms as needed can be generated.
  • All the numerical results in this article were obtained using the Mathematica 13 software.
  • DPSM provides the same results as those obtained by other power series methods, which guarantees its reliability, but it is faster and simpler than other methods.

6. Conclusions

In different branches of science, such as mathematics, physics, engineering, and statistics, there are large numbers of equations and systems that need to be solved. Mathematicians have created and developed many analytical and numerical methods to obtain the solution exactly or approximately. Power series methods, such as the RPSM, the Laplace residual power series method, and many others, provide accurate solutions and sometimes the exact solution. However, these methods require many steps each time, especially when solving nonlinear equations and systems. This article introduces a new method, called DPSM, that produces series solutions to some types of problems directly and without long computations. The method is simple compared to other methods, and it can provide more accurate results than other power series methods because it generates many terms of the series solution through a general recurrence that is easily programmed in computer software (Mathematica). DPSM calculates the coefficients of the power series expansion directly, without hard calculations. The obtained results demonstrate the applicability and efficiency of the proposed method. In future work, we will focus on obtaining new formulas for solving linear and nonlinear PDEs of fractional orders.

Author Contributions

Data curation, R.S., R.H., E.S. and A.Q.; formal analysis, E.S., R.S., A.Q. and R.H.; investigation, A.Q., E.S., R.S. and. R.H.; methodology, E.S., R.S., A.Q. and R.H.; project administration, A.Q., R.S., R.H. and E.S.; resources, E.S., R.S., R.H. and A.Q.; writing—original draft, A.Q., R.S., R.H. and E.S.; writing—review and editing, A.Q., E.S., R.S. and R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express their gratitude to the dear referees, who wish to remain anonymous, and the editor for their helpful suggestions, which improved the final version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Almeida, R.; Torres, D.F. Necessary and sufficient conditions for the fractional calculus of variations with Caputo derivatives. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 1490–1500. [Google Scholar] [CrossRef] [Green Version]
  2. Podlubny, I. Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 1999. [Google Scholar]
  3. Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity; Imperial College Press: London, UK, 2010. [Google Scholar]
  4. Kumar, S.; Kumar, R.; Osman, M.S.; Samet, B.A. Wavelet based numerical scheme for fractional order SEIR epidemic of measles by using Genocchi polynomials. Numer. Methods Partial. Differ. Equ. 2021, 37, 1250–1268. [Google Scholar] [CrossRef]
  5. Matlob, M.A.; Jamali, Y. The concepts and applications of fractional order differential calculus in modeling of viscoelastic systems: A primer. Crit. Rev. Biomed. Eng. 2019, 47, 249–276. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Kumar, P.; Qureshi, S. Laplace-Carson integral transform for exact solutions of non-integer order initial value problems with Caputo operator. J. Appl. Math. Comput. Mech. 2020, 19, 57–66. [Google Scholar] [CrossRef]
  7. D’Elia, M.; Du, Q.; Glusa, C.; Gunzburger, M.; Tian, X.; Zhou, Z. Numerical methods for nonlocal and fractional models. Acta Numer. 2020, 29, 1–124. [Google Scholar] [CrossRef]
  8. Kilbas, A.; Srivastava, H.; Trujillo, J. Theory and Applications of Fractional Differential Equations. In North-Holland Mathematics Studies; Elsevier Science BV: Amsterdam, The Netherlands, 2006. [Google Scholar]
  9. Baleanu, D.; Diethelm, K.; Scalas, E.; Trujillo, J.J. Fractional Calculus: Models and Numerical Methods; World Scientific: Singapore, 2012; Volume 3. [Google Scholar]
  10. Zhang, Y.; Kumar, A.; Kumar, S.; Baleanu, D.; Yang, X.J. Residual power series method for time-fractional Schrödinger equations. J. Nonlinear Sci. Appl. 2016, 9, 5821–5829. [Google Scholar] [CrossRef]
  11. Burqan, A.; Saadeh, R.; Qazza, A. ARA-Residual Power Series Method for Solving Partial Fractional Differential Equations. Alex. Eng. J. 2022, 62, 47–62. [Google Scholar] [CrossRef]
  12. Qazza, A.; Burqan, A.; Saadeh, R. Application of ARA Residual Power Series Method in Solving Systems of Fractional Differential Equations. Math. Probl. Eng. 2022, 2022, 6939045. [Google Scholar] [CrossRef]
  13. Kumar, S.; Kumar, A.; Baleanu, D. Two analytical methods for time-fractional nonlinear coupled Boussinesq–Burger’s equations arise in propagation of shallow water waves. Nonlinear Dyn. 2016, 85, 699–715. [Google Scholar] [CrossRef]
  14. Kumar, A.; Kumar, S.; Singh, M. Residual power series method for fractional Sharma-Tasso-Olever equation. Commun. Numer. Anal. 2016, 10, 1–10. [Google Scholar] [CrossRef]
  15. El-Ajou, A.; Arqub, O.A.; Zhour, Z.A.; Momani, S. New results on fractional power series: Theories and applications. Entropy 2013, 15, 5305–5323. [Google Scholar] [CrossRef]
  16. Shokooh, A.; Suárez, L. A comparison of numerical methods applied to a fractional model of damping materials. J. Vib. Control 1999, 5, 331–354. [Google Scholar] [CrossRef]
  17. Alquran, M.; Ali, M.; Alsukhour, M.; Jaradat, I. Promoted residual power series technique with Laplace transform to solve some time-fractional problems arising in physics. Results Phys. 2020, 19, 103667. [Google Scholar] [CrossRef]
  18. Wu, G.C.; Baleanu, D. Variational iteration method for fractional calculus—A universal approach by Laplace transform. Adv. Differ. Equ. 2013, 1, 18. [Google Scholar] [CrossRef] [Green Version]
  19. Thongmoon, M.; Pusjuso, S. The numerical solutions of differential transform method and the Laplace transform method for a system of differential equations. Nonlinear Anal. Hybrid Syst. 2010, 4, 425–431. [Google Scholar] [CrossRef]
  20. Diethelm, K.; Ford, N.J.; Freed, A.D.; Luchko, Y. Algorithms for the fractional calculus: A selection of numerical methods. Comput. Methods Appl. Mech. Eng. 2005, 194, 743–773. [Google Scholar] [CrossRef] [Green Version]
  21. Ezz-Eldien, S.S. On solving systems of multi-pantograph equations via spectral tau method. Appl. Math. Comput. 2018, 321, 63–73. [Google Scholar] [CrossRef]
  22. Abu-Gdairi, R. Analytical Solution of Nonlinear Fractional Gradient-Based System Using Fractional Power Series Method. Int. J. Anal. Appl. 2022, 20, 51. [Google Scholar] [CrossRef]
  23. Ahmed, S.A.; Qazza, A.; Saadeh, R. Exact Solutions of Nonlinear Partial Differential Equations via the New Double Integral Transform Combined with Iterative Method. Axioms 2022, 11, 247. [Google Scholar] [CrossRef]
  24. Eriqat, T.; El-Ajou, A.; Moa’ath, N.O.; Al-Zhour, Z.; Momani, S. A new attractive analytic approach for solutions of linear and nonlinear neutral fractional pantograph equations. Chaos Solitons Fractals 2020, 138, 109957. [Google Scholar] [CrossRef]
  25. Alquran, M.; Alsukhour, M.; Ali, M.; Jaradat, I. Combination of Laplace transform and residual power series techniques to solve autonomous n-dimensional fractional nonlinear systems. Nonlinear Eng. 2021, 10, 282–292. [Google Scholar] [CrossRef]
  26. Rani, A.; Saeed, M.; Ul-Hassan, Q.M.; Ashraf, M.; Khan, M.Y.; Ayub, K. Solving system of differential equations of fractional order by homotopy analysis method. J. Sci. Arts 2017, 17, 457–468. [Google Scholar]
  27. Komashynska, I.; Al-Smadi, M.; Al-Habahbeh, A.; Ateiwi, A. Analytical Approximate Solutions of Systems of Multi-pantograph Delay Differential Equations Using Residual Power-series Method. Aust. J. Basic Appl. Sci. 2014, 8, 664–675. [Google Scholar]
  28. Nieto, J.J. Solution of a fractional logistic ordinary differential equation. Appl. Math. Lett. 2022, 123, 107568. [Google Scholar] [CrossRef]
  29. Hattaf, K. On the stability and numerical scheme of fractional differential equations with application to biology. Computation 2022, 10, 97. [Google Scholar] [CrossRef]
  30. Agarwal, R.P.; Benchohra, M.; Hamani, S. Boundary value problems for fractional differential equations. Georgian Math. J. 2009, 16, 401–411. [Google Scholar] [CrossRef]
  31. Weilbeer, M. Efficient Numerical Methods for Fractional Differential Equations and Their Analytical Background. Ph.D. Thesis, Technical University of Braunschweig, Braunschweig, Germany, 2006. [Google Scholar]
  32. Ismail, G.; Abdl-Rahim, H.; Ahmad, H.; Chu, Y. Fractional residual power series method for the analytical and approximate studies of fractional physical phenomena. Open Phys. 2020, 18, 799–805. [Google Scholar] [CrossRef]
  33. Chen, S.B.; Soradi-Zeid, S.; Alipour, M.; Chu, Y.M.; Gomez-Aguilar, J.F.; Jahanshahi, H. Optimal control of nonlinear time-delay fractional differential equations with Dickson polynomials. Fractals 2021, 29, 2150079. [Google Scholar] [CrossRef]
  34. Chu, Y.M.; Hani, E.H.B.; El-Zahar, E.R.; Ebaid, A.; Shah, N.A. Combination of Shehu decomposition and variational iteration transform methods for solving fractional third order dispersive partial differential equations. Numer. Methods Partial Differ. Equ. 2021. [Google Scholar] [CrossRef]
Figure 1. Profile of (a) exact and approximate solutions and (b) absolute errors regarding $y_{1}$ for Example 4.
Figure 2. Profile of (a) exact and approximate solutions and (b) absolute errors regarding $y_{2}$ for Example 4.
Figure 3. Profile of (a) exact and approximate solutions and (b) absolute errors regarding $y_{3}$ for Example 4.
Figure 4. Profile of exact and approximate solutions and absolute errors regarding $y_{1}$ for Example 5.
Figure 5. Profile of exact and approximate solutions and absolute errors regarding $y_{2}$ for Example 5.
Figure 6. Plots of the exact solution for $y_{1}(t)$ and the solution using DPSM for $y_{1,k}(t)$, with $k=2,4,6,8,10$, for Example 6 on [0,6], where $y_{1}(t)$ and $y_{1,k}(t)$ are represented, respectively, by straight and dashed lines.
Figure 7. Plots of the exact solution for $y_{2}(t)$ and the solution using DPSM for $y_{2,k}(t)$, with $k=2,4,6,8,10$, for Example 6 on [0,6], where $y_{2}(t)$ and $y_{2,k}(t)$ are represented, respectively, by straight and dashed lines.
Figure 8. Plots of the exact solution for $y_{3}(t)$ and the solution using DPSM for $y_{3,k}(t)$, with $k=2,4,6,8,10$, for Example 6 on [0,6], where $y_{3}(t)$ and $y_{3,k}(t)$ are represented, respectively, by straight and dashed lines.
Table 1. Error analysis of $y_{1}(t)$ for Example 4 on $[0,1]$.
t | Exact Solution | Solution by DPSM | $|y_{1}(t)-y_{1,12}(t)|$
0.16 | 1.17351 | 1.17351 | 4.44089 × 10^{-16}
0.32 | 1.37713 | 1.37713 | 2.22045 × 10^{-16}
0.48 | 1.61607 | 1.61607 | 1.22125 × 10^{-14}
0.64 | 1.89648 | 1.89648 | 5.0826 × 10^{-13}
0.8 | 2.22554 | 2.22554 | 9.3614 × 10^{-12}
0.96 | 2.6117 | 2.6117 | 1.01377 × 10^{-10}
Table 2. Error analysis of $y_{2}(t)$ for Example 4 on $[0,1]$.
t | Exact Solution | Solution by DPSM | $|y_{2}(t)-y_{2,12}(t)|$
0.16 | 1.37713 | 1.37713 | 2.22045 × 10^{-16}
0.32 | 1.89648 | 1.89648 | 5.0826 × 10^{-13}
0.48 | 2.6117 | 2.6117 | 1.01377 × 10^{-10}
0.64 | 3.59664 | 3.59664 | 4.37325 × 10^{-9}
0.8 | 4.95303 | 4.95303 | 8.1568 × 10^{-8}
0.96 | 6.82096 | 6.82096 | 8.95355 × 10^{-7}
Table 3. Error analysis of $y_{3}(t)$ for Example 4 on $[0,1]$.
t | Exact Solution | Solution by DPSM | $|y_{3}(t)-y_{3,12}(t)|$
0.16 | 0.616074 | 0.616074 | 1.21014 × 10^{-14}
0.32 | 1.6117 | 1.6117 | 1.01378 × 10^{-10}
0.48 | 3.2207 | 3.2207 | 2.04739 × 10^{-8}
0.64 | 5.82096 | 5.82096 | 8.95355 × 10^{-7}
0.8 | 10.0232 | 10.0232 | 1.69419 × 10^{-5}
0.96 | 16.8143 | 16.8143 | 1.88814 × 10^{-4}
Table 4. Error analysis of $y_{1}(t)$ for Example 5 on $[0,1]$.
t | Exact Solution | Solution by DPSM | $|y_{1}(t)-y_{1,12}(t)|$
0.16 | 0.726149 | 0.726149 | 0
0.32 | 0.527292 | 0.527292 | 4.63962 × 10^{-13}
0.48 | 0.382893 | 0.382893 | 8.83733 × 10^{-11}
0.64 | 0.278037 | 0.278037 | 3.64122 × 10^{-9}
0.8 | 0.201897 | 0.201897 | 6.48591 × 10^{-8}
0.96 | 0.146607 | 0.146607 | 6.79809 × 10^{-7}
Table 5. Error analysis of $y_{2}(t)$ for Example 5 on $[0,1]$.
t | Exact Solution | Solution by DPSM | $|y_{2}(t)-y_{2,12}(t)|$
0.16 | 0.852114 | 0.852114 | 2.22045 × 10^{-16}
0.32 | 0.726149 | 0.726149 | 0
0.48 | 0.618783 | 0.618783 | 1.13243 × 10^{-14}
0.64 | 0.527292 | 0.527292 | 4.63962 × 10^{-13}
0.8 | 0.449329 | 0.449329 | 8.34971 × 10^{-12}
0.96 | 0.382893 | 0.382893 | 8.83733 × 10^{-11}
Table 6. Error analysis of $y_{1}(t)$ for Example 6 on $[0,1]$.
t | Exact Solution | Solution by DPSM | $|y_{1}(t)-y_{1,10}(t)|$ | $\left|\frac{y_{1}(t)-y_{1,10}(t)}{y_{1}(t)}\right|$
0.2 | −0.980067 | −0.980067 | 0 | 0
0.4 | −0.921061 | −0.921061 | 3.5083 × 10^{-14} | 3.80898 × 10^{-14}
0.6 | −0.825336 | −0.825336 | 4.53548 × 10^{-12} | 5.49532 × 10^{-12}
0.8 | −0.696707 | −0.696707 | 1.42961 × 10^{-10} | 2.05195 × 10^{-10}
1 | −0.540302 | −0.540302 | 2.07625 × 10^{-9} | 3.84276 × 10^{-9}
Table 7. Error analysis of $y_{2}(t)$ for Example 6 on $[0,1]$.
t | Exact Solution | Solution by DPSM | $|y_{2}(t)-y_{2,10}(t)|$ | $\left|\frac{y_{2}(t)-y_{2,10}(t)}{y_{2}(t)}\right|$
0.2 | 0.196013 | 0.196013 | 5.66214 × 10^{-15} | 2.88865 × 10^{-14}
0.4 | 0.368424 | 0.368424 | 1.15443 × 10^{-11} | 3.13343 × 10^{-11}
0.6 | 0.495201 | 0.495201 | 9.9705 × 10^{-10} | 2.01342 × 10^{-9}
0.8 | 0.557365 | 0.557365 | 2.35572 × 10^{-8} | 4.22653 × 10^{-8}
1 | 0.540302 | 0.540302 | 2.73497 × 10^{-7} | 5.06192 × 10^{-7}
Table 8. Error analysis of $y_{3}(t)$ for Example 6 on $[0,1]$.
t | Exact Solution | Solution by DPSM | $|y_{3}(t)-y_{3,10}(t)|$ | $\left|\frac{y_{3}(t)-y_{3,10}(t)}{y_{3}(t)}\right|$
0.2 | 0.198669 | 0.198669 | 5.55112 × 10^{-16} | 2.79415 × 10^{-15}
0.4 | 0.389418 | 0.389418 | 1.04966 × 10^{-12} | 2.69546 × 10^{-12}
0.6 | 0.564642 | 0.564642 | 9.06788 × 10^{-11} | 1.60595 × 10^{-10}
0.8 | 0.717356 | 0.717356 | 2.14316 × 10^{-9} | 2.98758 × 10^{-9}
1 | 0.841471 | 0.841471 | 2.48923 × 10^{-8} | 2.95819 × 10^{-8}
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
