Article

A Convergent Collocation Approach for Generalized Fractional Integro-Differential Equations Using Jacobi Poly-Fractonomials

1 Department of Mathematical Sciences, Indian Institute of Technology (BHU) Varanasi, Varanasi 221005, Uttar Pradesh, India
2 Department of Mathematics and Statistics, University of Victoria, Victoria, BC V8W 3R4, Canada
3 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4 Department of Mathematics and Informatics, Azerbaijan University, 71 Jeyhun Hajibeyli Street, Baku AZ1007, Azerbaijan
5 Section of Mathematics, International Telematic University Uninettuno, I-00186 Rome, Italy
6 Department of Applied Mathematics, Indian Institute of Technology (Indian School of Mines), Dhanbad 826004, Jharkhand, India
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(9), 979; https://doi.org/10.3390/math9090979
Submission received: 16 March 2021 / Revised: 14 April 2021 / Accepted: 23 April 2021 / Published: 27 April 2021

Abstract:
In this paper, we present a convergent collocation method with which to find the numerical solution of a generalized fractional integro-differential equation (GFIDE). The presented approach is based on the collocation method using Jacobi poly-fractonomials. The GFIDE is defined in terms of the B-operator introduced recently, and it reduces to Caputo fractional derivative and other fractional derivatives in special cases. The convergence and error analysis of the proposed method are also established. Linear and nonlinear cases of the considered GFIDEs are numerically solved and simulation results are presented to validate the theoretical results.

1. Introduction

Fractional calculus is the branch of applied mathematics that deals with integration and differentiation of arbitrary order [1,2,3,4,5,6,7]. Fractional-order derivatives and integrals arise in the modeling of several problems in various domains, including physics, engineering, and biology. In the last few decades, applications of fractional differential equations (FDEs) have increased very rapidly in fields like bioengineering [8], fluid dynamics [9], electrochemistry [10], electromagnetism [11], control theory [12], and viscoelasticity [13]. An introductory overview of fractional-order derivatives and recent developments is presented in [1]. In recent years, fractional integro-differential equations (FIDEs) have been investigated to represent physical phenomena in various fields such as electromagnetics [14]. Several researchers have concentrated on the development of numerical and analytical techniques for FIDEs. For example, Angell and Olmstead [15] solved an integro-differential equation modeling filament stretching using the singular perturbation method. Khosro et al. [16] presented a numerical technique based on Bernstein's operational matrix for a family of FIDEs utilizing the trapezoidal rule. Kilbas et al. [17] developed some basic concepts for solving FIDEs and also provided an existence and uniqueness theorem. In [18], a new set of functions, called the fractional-order Euler functions and based on the Euler function, was constructed to obtain the numerical solution of FIDEs; using the properties of these functions, the authors found the approximate solution by the operational matrix approach and also discussed the convergence analysis of the problem. Saadatmandi and Dehghan [19] developed a numerical method for solving linear and nonlinear FIDEs with the fractional derivative defined in the Caputo sense; the approximate solution was found by Legendre approximations.
The properties of Legendre polynomials, together with the Gaussian integration method, were utilized to reduce the problem to a system of algebraic equations. In [20], two numerical approximations, linear and quadratic, were proposed for the generalized Abel integral equation, and the error and convergence of both schemes were discussed; it was found that the quadratic scheme achieves a convergence order of up to three. Bonilla et al. [21] dealt with linear systems of fractional differential equations defined in terms of the Riemann-Liouville or Caputo fractional derivative. In [22,23], Adomian's decomposition method for solving systems of nonlinear FIDEs was discussed. In [24,25,26,27,28,29], the authors used the collocation method for solving fractional differential equations. Odibat [30] presented an analytical study of linear systems of fractional differential equations with constant coefficients and briefly described the issue of existence and uniqueness for systems of fractional-order differential equations. There are many more methods to solve FIDEs, such as finite difference and finite element methods [31]. Kamal et al. [32] applied the spectral tau method to solve general fractional-order differential equations. In [33], a system of fractional differential equations was solved by the homotopy analysis method. Some recent work in this field was done by Hassani et al. [34]: the authors proposed a method for solving a system of nonlinear fractional-order partial differential equations with initial conditions, first expanding the solution using the operational matrix method and then evaluating the unknown coefficients by an optimization technique.
Here, we consider GFIDEs defined in terms of the B-operator [35], which reduces to the Caputo or Riemann-Liouville derivative for particular choices of its kernel. The GFIDEs are solved using the collocation method with Jacobi poly-fractonomials; details of the Jacobi poly-fractonomials are presented in [36]. It has been noted that Jacobi poly-fractonomials, as basis functions, have an edge over other standard polynomials, since the method achieves exponential convergence in approximating fractional polynomial functions. The GFIDEs are converted into a system of algebraic equations, and by solving it we obtain the approximate solution. The main aim of this study is to develop an approximate method of high accuracy for the defined GFIDEs, producing solutions close to the exact solution. Section 1 provides the basic definitions of the operators as given in [35], together with some basic properties of Jacobi poly-fractonomials [36].
The rest of the paper is organized as follows: Section 2 describes the procedure for applying the collocation method to GFIDEs. Section 3 provides a convergence analysis of the method. In Section 4, we present an error analysis in two parts, covering the linear and nonlinear cases of the GFIDEs. Section 5 presents five numerical examples that illustrate the accuracy of the proposed method. Finally, Section 6 concludes the paper.

1.1. Generalized Fractional Integro-Differential Equations

We first define GFIDEs in terms of the K- and A/B-operators. These operators [35] are defined as follows:
$$K_P^{\alpha} f(x) = r\int_a^x w_{\alpha}(x,t)\,f(t)\,dt + s\int_x^b w_{\alpha}(t,x)\,f(t)\,dt, \quad \alpha > 0,$$
where $x\in[a,b]$, $P = \langle a, x, b, r, s\rangle$ denotes the set of parameters, and $w_{\alpha}(x,t)$ is a kernel defined on the space $I\times I$. We assume that $w_{\alpha}(x,t)$ and $f(t)$ are both square-integrable functions, so that Equation (1) exists. The K-operator satisfies the linearity property; i.e., for any two functions $f_1(t)$ and $f_2(t)$,
$$K_P^{\alpha}\big(f_1(x) + f_2(x)\big) = K_P^{\alpha} f_1(x) + K_P^{\alpha} f_2(x).$$
The A- and B-operators [35] are defined as follows:
$$A_P^{\alpha} f(x) = D^n\,K_P^{\,n-\alpha} f(x),$$
$$B_P^{\alpha} f(x) = K_P^{\,n-\alpha}\,D^n f(x).$$
By using Equation (4), Equation (1) can be written as
$$B_P^{\alpha} f(x) = r\int_0^x \omega_{n-\alpha}(x,t)\,D^n f(t)\,dt + s\int_x^1 \omega_{n-\alpha}(t,x)\,D^n f(t)\,dt, \quad \alpha > 0,$$
where $n-1 < \alpha < n$ with $n$ an integer, $P = \langle a, x, b, r, s\rangle$, and $D^n f(t)$ denotes the $n$th derivative of the function $f(t)$. In the definition of the B-operator, we assume that $D^n f(x)$ is integrable on the domain $I$. Details about these operators can be found in [35].
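As a quick numerical illustration (our own sketch, not part of the paper): for the kernel choice $w_{1-\alpha}(x,t) = (x-t)^{-\alpha}/\Gamma(1-\alpha)$ with $r = 1$, $s = 0$, and $n = 1$, the B-operator reduces to the Caputo derivative. This can be checked by quadrature against the known Caputo derivative $D^{\alpha} x^2 = 2x^{2-\alpha}/\Gamma(3-\alpha)$; the order `alpha = 0.6` and the evaluation point are arbitrary test choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha = 0.6   # assumed example order, 0 < alpha < 1 (so n = 1)

def B_caputo(df, x):
    """(B_P^alpha f)(x) = int_0^x (x-t)^(-alpha)/Gamma(1-alpha) * f'(t) dt,
    evaluated with quad's algebraic-singularity weight (x-t)^(-alpha)."""
    val, _ = quad(lambda t: df(t) / gamma(1 - alpha), 0.0, x,
                  weight='alg', wvar=(0.0, -alpha))
    return val

# Check against the closed-form Caputo derivative of f(x) = x^2:
x = 0.8
approx = B_caputo(lambda t: 2.0 * t, x)            # f'(t) = 2t
exact = 2.0 * x**(2 - alpha) / gamma(3 - alpha)    # 2 x^(2-alpha)/Gamma(3-alpha)
print(approx, exact)   # the two values agree to quadrature accuracy
```

The same quadrature pattern applies to the right-sided term (the $s$-integral) with the roles of the endpoints exchanged.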
Now, we define the GFIDEs using the B-operator as follows:
$$(B_P^{\alpha} y)(x) = (H\,y)(x), \quad 0 < \alpha < 1,$$
$$y(0) = y_0,$$
where
$$(H\,y)(x) = \phi(x) + g(x)\,y(x) + \int_0^x \rho(x,t)\,G(y(t))\,dt, \quad 0 < \alpha < 1, \quad x\in I=[0,1],$$
$$(B_P^{\alpha} y)(x) = r\int_0^x \omega_{n-\alpha}(x,t)\,D^n y(t)\,dt + s\int_x^1 \omega_{n-\alpha}(t,x)\,D^n y(t)\,dt, \quad \alpha > 0,$$
where the functions $\phi(x)$ and $g(x)$ are square-integrable on $I$ with $g(x)\neq 0$, and $y(x)$ is unknown. The problem is considered on the interval $[0,1]$, and the kernel $\rho(x,t)$ is weakly singular, of the form
$$\rho(x,t) = (x-t)^{-v}, \quad 0 < v < 1.$$
The definition of the B-operator also involves a second kernel, $\omega_{\alpha}(x,t)$, which we take in $L^2(I\times I)$; the operator $G$ in Equation (7) can be either linear or nonlinear.
We assume that Equations (5) and (6) have a unique solution for all real values of $r$ and $s$: either $r = 0$ for all $s$, or $s = 0$ for all $r$, together with the special case $r = 1$ and $s = 0$. The objective is to construct a numerical method to solve the GFIDEs given by Equations (5) and (6).

1.2. Preliminaries: Jacobi Poly-Fractonomials and Function Approximation

1.2.1. Definition and Properties of Shifted Jacobi Poly-Fractonomials

Jacobi poly-fractonomials are the eigenfunctions of the fractional Sturm-Liouville eigenproblem of the first kind, defined by
$$P_n^{a}(x) = (1+x)^{a}\,J_n^{-a,a}(x), \quad x\in[-1,1], \quad n = 0, 1, 2, \ldots,$$
where $J_n^{-a,a}$ denotes the standard Jacobi polynomial of degree $n$. The Jacobi polynomials $J_n^{a,b}$ form a Hilbert-space basis in $L_w^2[-1,1]$ and satisfy, with respect to the Jacobi weight function $w(x) = (1-x)^{a}(1+x)^{b}$, the orthogonality relation
$$\int_{-1}^{1} J_m^{a,b}(x)\,J_n^{a,b}(x)\,w(x)\,dx = Y_n^{a,b}\,\delta_{nm},$$
where $\delta_{nm}$ is the Kronecker delta and
$$Y_n^{a,b} = \frac{2^{a+b+1}\,\Gamma(n+a+1)\,\Gamma(n+b+1)}{(2n+a+b+1)\,\Gamma(n+1)\,\Gamma(n+a+b+1)}$$
is the orthogonality constant. Explicitly,
$$J_n^{a,b}(x) = \frac{\Gamma(n+a+1)}{\Gamma(n+1)\,\Gamma(n+a+b+1)}\sum_{r=0}^{n}\binom{n}{r}\frac{\Gamma(a+b+n+r+1)}{\Gamma(r+a+1)}\left(\frac{x-1}{2}\right)^{r}$$
is the Jacobi polynomial of degree $n$, where $\binom{n}{r} = \frac{n(n-1)(n-2)\cdots(n-r+1)}{r!}$ is the binomial coefficient and $\Gamma$ denotes Euler's gamma function.
It has been proved in [36] that the Jacobi poly-fractonomials $P_n^{a}(x)$ are orthogonal with respect to the weight function $w(x) = (1-x)^{-a}(1+x)^{-a}$ and form a Hilbert space in $L_w^2[-1,1]$. Using the transformation $x \mapsto \frac{2x}{T}-1$, which maps $[0,T]$ onto the standard interval $[-1,1]$, we obtain the corresponding shifted Jacobi poly-fractonomials of the first kind:
$$\tilde P_n^{a}(x) = \left(\frac{2}{T}\right)^{a} x^{a}\,J_n^{-a,a}\!\left(\frac{2x}{T}-1\right), \quad x\in[0,T], \quad n = 0, 1, 2, 3, \ldots$$
On the interval $[0,1]$, Equation (12) takes the form
$$\tilde P_n^{a}(x) = 2^{a}\,x^{a}\,J_n^{-a,a}(2x-1), \quad x\in[0,1], \quad n = 0, 1, 2, 3, \ldots$$
$\tilde P_n^{a}(x)$ can be written as a finite series using the definition of the Jacobi polynomial:
$$\tilde P_n^{a}(x) = \frac{2^{a}\,\Gamma(n-a+1)}{\Gamma(n+1)\,\Gamma(n+1)}\sum_{r=0}^{n}\binom{n}{r}\frac{\Gamma(n+r+1)}{\Gamma(r-a+1)}\,x^{a}(x-1)^{r}.$$
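The definition and the series form can be cross-checked numerically; the following sketch (our own, using `scipy.special.eval_jacobi` for $J_n^{-a,a}$, with the test value $a = 0.5$ as an assumption) compares both representations on a grid:

```python
import numpy as np
from scipy.special import gamma, eval_jacobi, comb

a = 0.5   # assumed fractional parameter, 0 < a < 1

def P_series(n, x):
    """Shifted poly-fractonomial via the explicit series in x^a (x-1)^r."""
    s = sum(comb(n, r) * gamma(n + r + 1) / gamma(r - a + 1) * (x - 1)**r
            for r in range(n + 1))
    return 2**a * gamma(n - a + 1) / gamma(n + 1)**2 * x**a * s

def P_direct(n, x):
    """Shifted poly-fractonomial via its definition 2^a x^a J_n^{-a,a}(2x-1)."""
    return 2**a * x**a * eval_jacobi(n, -a, a, 2*x - 1)

xs = np.linspace(0.01, 1.0, 50)
for n in range(5):
    assert np.allclose(P_series(n, xs), P_direct(n, xs))
```

Note that each $\tilde P_n^{a}$ carries the fractional factor $x^{a}$, which is what lets the basis capture solutions with fractional-power behavior near $x = 0$.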
Corresponding to the weight function $\tilde w(x) = (1-x)^{-a}\,x^{-a}$, we have the orthogonality property
$$\int_{-1}^{1} P_n^{a}(x)\,P_m^{a}(x)\,w(x)\,dx = Y_n^{-a,a}\,\delta_{nm},$$
$$\int_{-1}^{1} P_n^{a}(x)\,P_m^{a}(x)\,w(x)\,dx = 2^{2a+1}\int_0^1 \tilde P_n^{a}(x)\,\tilde P_m^{a}(x)\,\tilde w(x)\,dx.$$
Let $X = L^2(I)$ be the space of square-integrable functions on the interval $I$. For $g_1, g_2 \in X$, the inner product is defined by
$$\langle g_1, g_2\rangle = \int_0^1 g_1(t)\,g_2(t)\,dt,$$
and the corresponding norm is defined as
$$\|g\|_2 = \left(\int_0^1 |g(t)|^2\,dt\right)^{1/2}.$$

1.2.2. Function Approximation Using Jacobi Poly-Fractonomials

Any function $g(x)\in L^2(I)$ can be written as
$$g(x) = \sum_{k=0}^{\infty} c_k\,\tilde P_k^{a}(x).$$
In practice, we retain only the first $(R+1)$ terms of the series; then
$$g(x) \approx g_R(x) = \sum_{k=0}^{R} c_k\,\tilde P_k^{a}(x) = C^{T}\,\tilde{\mathbf P}^{a}(x),$$
where $C$ and $\tilde{\mathbf P}^{a}(x)$ are given by
$$C = [c_0, c_1, \ldots, c_R]^{T}, \qquad \tilde{\mathbf P}^{a}(x) = [\tilde P_0^{a}(x), \tilde P_1^{a}(x), \ldots, \tilde P_R^{a}(x)]^{T}.$$
Theorem 1
([2]). Let $g(x)$ be a real, sufficiently smooth function, and let $g_R(x) = C^{T}\tilde{\mathbf P}^{a}(x)$ denote its shifted Jacobi poly-fractonomial expansion, where
$$C = [c_0, c_1, \ldots, c_R]^{T}, \quad\text{and}\quad c_r = \frac{2^{2a+1}}{Y_r^{-a,a}}\int_0^1 g(x)\,\tilde w(x)\,\tilde P_r^{a}(x)\,dx.$$
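The coefficients can equivalently be computed as the ratio of two weighted integrals, $c_r = \langle g, \tilde P_r^{a}\rangle_{\tilde w} / \|\tilde P_r^{a}\|_{\tilde w}^2$, which sidesteps the normalization constant entirely. The following sketch (our own, with the assumed weight $(1-x)^{-a}x^{-a}$, $a = 0.5$, and a test function chosen to lie in the span of the first three basis elements) checks that this weighted projection reproduces such a function:

```python
import numpy as np
from scipy.special import jacobi
from scipy.integrate import quad

a = 0.5   # assumed fractional parameter

def Pt(n, x):
    """Shifted Jacobi poly-fractonomial P~_n^a(x) = 2^a x^a J_n^{-a,a}(2x-1)."""
    return 2**a * x**a * np.polyval(jacobi(n, -a, a).coeffs, 2.0*x - 1.0)

def coeff(g, n):
    """c_n = <g, P~_n>_w / <P~_n, P~_n>_w with weight (1-x)^(-a) x^(-a)."""
    w = dict(weight='alg', wvar=(-a, -a))      # quad weight x^(-a) (1-x)^(-a)
    num = quad(lambda x: g(x) * Pt(n, x), 0, 1, **w)[0]
    den = quad(lambda x: Pt(n, x)**2, 0, 1, **w)[0]
    return num / den

# A function lying in span{P~_0, P~_1, P~_2} is reproduced exactly by R = 2:
g = lambda x: np.sqrt(x) * (1.0 + 2.0*x + 3.0*x**2)
c = [coeff(g, n) for n in range(3)]
xs = np.linspace(0.0, 1.0, 11)
gR = sum(c[n] * Pt(n, xs) for n in range(3))
print(np.max(np.abs(gR - g(xs))))   # ~ 0 up to quadrature error
```

For a general smooth $g$, the computed coefficients decay rapidly, which is the numerical face of the uniform-convergence statement in Theorem 2 below.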
Theorem 2.
Let $g(x)\in L^2[0,1]\cap C[0,1]$, and suppose $\sup_x |g(x)| \le \mathcal{L}$. Then the Jacobi approximation of $g(x)$ given by Equation (18) converges uniformly, and we have
$$|c_r| \le \frac{\mathcal{L}\,2^{a}\,(2n+1)\,(n!)^2\,\Gamma(1-a)}{\Gamma(n+a+1)\,\Gamma(2-a+n)\,\Gamma(1-a+n)}.$$
Proof.
A function $g(x)\in L^2[0,1]\cap C[0,1]$ can be written as in Equation (18), with coefficients determined by
$$c_r = \frac{1}{\|\tilde P_r^{a}(x)\|_2^2}\int_0^1 g(x)\,\tilde w(x)\,\tilde P_r^{a}(x)\,dx,$$
so that
$$|c_r| \le \frac{1}{\|\tilde P_r^{a}(x)\|_2^2}\,\sup_x|g(x)|\int_0^1 \big|\tilde w(x)\,\tilde P_r^{a}(x)\big|\,dx.$$
Since $\sup_x|g(x)| \le \mathcal{L}$, from Equation (23) we get
$$|c_r| \le \frac{\mathcal{L}}{\|\tilde P_r^{a}(x)\|_2^2}\int_0^1 \big|\tilde w(x)\,\tilde P_r^{a}(x)\big|\,dx.$$
Substituting the values of $\tilde w(x)$ and $\tilde P_r^{a}(x)$ in Equation (24), we have
$$|c_r| \le \frac{\mathcal{L}\,2^{a}}{\|\tilde P_r^{a}(x)\|_2^2}\sum_{r=0}^{n} A[r,a,n]\int_0^1 \big|(1-x)^{-a}\,x^{-a}\,(x-1)^{r}\,x^{a}\big|\,dx,$$
where $A[r,a,n] = \binom{n}{r}\frac{\Gamma(n+r+1)\,\Gamma(n-a+1)}{\Gamma(n+1)\,\Gamma(r-a+1)}$. Evaluating the integral,
$$|c_r| \le \frac{\mathcal{L}\,2^{a}}{\|\tilde P_r^{a}(x)\|_2^2}\sum_{r=0}^{n} \frac{A[r,a,n]}{1-a+r}.$$
Now, substituting the value of $\|\tilde P_r^{a}(x)\|_2^2$ from Equation (13) and $A[r,a,n]$ from Equation (25) in Equation (26), and simplifying, we obtain
$$|c_r| \le \frac{\mathcal{L}\,2^{a}\,\Gamma(1-a)\,(2n+1)\,(n!)^2}{\Gamma(n+a+1)\,\Gamma(1-a+n)\,\Gamma(2-a+n)}.$$
From this calculation we observe that the coefficients are bounded, so $\sum_r c_r$ converges absolutely; hence, $\sum_r c_r\,\tilde P_r^{a}(x)$ converges uniformly to $g(x)$. □

2. Collocation Method for GFIDEs

In this section, we describe the collocation method for solving the GFIDEs given by Equations (5) and (6). Collocation is a projection method in which the approximate solution is expressed in a finite-dimensional basis that is expected to be close to the true solution. With the help of this family of functions, we approximate the solution of the GFIDEs given by Equations (5) and (6).
Using Equation (18), we now approximate $y(x)$ as
$$y_R(x) = \sum_{r=0}^{R} c_r\,\tilde P_r^{a}(x),$$
where the $c_r$ are unknown expansion coefficients to be determined. It should be noted that the approximate solution $y_R(x)$ satisfies the homogeneous initial condition. Replacing the exact solution $y(x)$ by the approximate solution $y_R(x)$ in Equation (5), we get
$$(B_P^{\alpha} y_R)(x) = (H\,y_R)(x), \quad 0 < \alpha < 1,$$
$$\text{and}\quad y_R(0) = y_0.$$
If we are given a nonhomogeneous initial condition, $y_R(0) = y_0 \neq 0$, we first homogenize it by the transformation $\tilde y(x) = y(x) - y_0$ and then replace $y(x)$ by $\tilde y(x)$; the Jacobi poly-fractonomial basis is constructed so as to satisfy homogeneous initial conditions.
From Equations (28) and (29), we have
$$B_P^{\alpha}\left(\sum_{r=0}^{R} c_r\,\tilde P_r^{a}(x)\right) = H\left(\sum_{r=0}^{R} c_r\,\tilde P_r^{a}(x)\right),$$
$$\sum_{r=0}^{R} c_r\,\tilde P_r^{a}(0) = y_0.$$
To apply the collocation method, the node points $x_t \in I = [0,1]$ are chosen such that
$$B_P^{\alpha}\left(\sum_{r=0}^{R} c_r\,\tilde P_r^{a}(x_t)\right) = H\left(\sum_{r=0}^{R} c_r\,\tilde P_r^{a}(x_t)\right), \quad t = 0, 1, 2, \ldots, R-1,$$
$$\text{and}\quad \sum_{r=0}^{R} c_r\,\tilde P_r^{a}(0) = y_0.$$
Equations (32) and (33) form a system of algebraic equations in the unknown coefficients $\{c_r\}$. Solving this system by any standard method yields the coefficients $\{c_r\}$, and hence the approximate solution.
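As a concrete illustration of the resulting algebraic system (our own minimal sketch, not the paper's MATLAB implementation): in the Caputo special case $r = 1$, $s = 0$ with $a = \alpha = 0.5$, a linear $G(y) = y$, $g(x) = -1$, and kernel exponent $v = 0.5$, every basis function is a short sum of monomials $x^{a+k}$, so both the fractional derivative and the weakly singular integral are available in closed form, and the collocation equations become a small linear system. The right-hand side $\phi$ is manufactured from the exact solution $y(x) = x^{1/2} + x^{3/2}$, which lies in the span of the first two basis elements.

```python
import numpy as np
from scipy.special import gamma, jacobi

# Toy setting (our choice): Caputo case r = 1, s = 0, a = alpha,
# linear G(y) = y, g(x) = -1, kernel rho(x,t) = (x-t)^(-v).
a = alpha = 0.5
v = 0.5

def basis_monomials(n):
    """P~_n^a(x) = 2^a x^a J_n^{-a,a}(2x-1) as a list of (coeff, power) pairs."""
    q = np.poly1d(np.polyval(np.poly1d(jacobi(n, -a, a).coeffs),
                             np.poly1d([2.0, -1.0])))      # compose xi = 2x - 1
    return [(2**a * c, a + (q.order - j)) for j, c in enumerate(q.coeffs)]

def caputo(beta, x):
    """Caputo derivative of order alpha of the monomial x^beta (beta > 0)."""
    return gamma(beta + 1) / gamma(beta + 1 - alpha) * x**(beta - alpha)

def weak_integral(beta, x):
    """int_0^x (x-t)^(-v) t^beta dt in closed form (Beta-function identity)."""
    return gamma(beta + 1) * gamma(1 - v) / gamma(beta + 2 - v) * x**(beta + 1 - v)

def lhs(mono, x):
    """(B_P^alpha y)(x) - g(x) y(x) - int_0^x rho(x,t) y(t) dt for monomial y."""
    return sum(c * (caputo(b, x) + x**b - weak_integral(b, x)) for c, b in mono)

# Manufactured exact solution y(x) = x^(1/2) + x^(3/2); phi makes it exact.
exact = [(1.0, 0.5), (1.0, 1.5)]
phi = lambda x: lhs(exact, x)

# Two basis functions, two interior nodes (y(0) = 0 holds automatically).
nodes = [0.3, 0.7]
A = np.array([[lhs(basis_monomials(n), x) for n in range(2)] for x in nodes])
b = np.array([phi(x) for x in nodes])
coef = np.linalg.solve(A, b)

def yR(x):
    return sum(coef[n] * c * x**p for n in range(2) for c, p in basis_monomials(n))

print(abs(yR(0.6) - (0.6**0.5 + 0.6**1.5)))   # ~ 0: the exact solution is recovered
```

With a nonlinear $G$, the same assembly produces a nonlinear algebraic system, which would be handed to a root finder instead of `np.linalg.solve`.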

3. Convergence Analysis

In this section, we establish the convergence of the proposed method. To this end, we expand the derivative of the solution in the basis and show that the tail of the expansion is bounded. To study the convergence of the presented method for solving GFIDEs, we will use the following lemmas.
Lemma 1
([29]). Let $X = L^2(I)$ denote the vector space of square-integrable functions on $I = [0,1]$, and let $\xi$ be the Volterra integral operator on $X$ defined by
$$\xi(g(x)) = \int_0^x \rho(x,t)\,g(t)\,dt, \quad g\in X,$$
with kernel $\rho(x,t)$ satisfying $\int_0^1\int_0^1 |\rho(x,t)|^2\,dx\,dt = L^2$ or $\sup_{x,t}\rho(x,t) = L$, where $L$ is a constant. Then $\xi$ is bounded on $L^2(I)$; that is,
$$\|\xi(g(x))\|_2 \le L\,\|g\|_2.$$
Lemma 2.
Let $y(x)$ be a sufficiently differentiable function in $L^2(I)$, and let $\frac{dy_R}{dx}$ be the approximation of $\frac{dy}{dx}$. Assume that $\frac{dy}{dx}$ is bounded by a constant $C$, i.e., $\left|\frac{dy}{dx}\right| \le C$. Then we have
$$\left\|\frac{dy}{dx}-\frac{dy_R}{dx}\right\|_2^2 \le \frac{C^2\,\Gamma(1-a)\,(1+2R)\,(R!)^2\,(1-a+R)}{\Gamma(1-a+R)\,\Gamma(2-a+R)\,\Gamma(1+a+R)}.$$
Proof.
Let
$$\frac{dy}{dx} = \sum_{r=0}^{\infty} c_r\,\tilde P_r^{a}(x).$$
Truncating the series at level $R-1$ and replacing the exact solution by the approximate one, we get
$$\frac{dy_R}{dx} = \sum_{r=0}^{R-1} c_r\,\tilde P_r^{a}(x).$$
Subtracting Equation (37) from Equation (36), we get
$$\frac{dy}{dx}-\frac{dy_R}{dx} = \sum_{r=R}^{\infty} c_r\,\tilde P_r^{a}(x),$$
$$\left\|\frac{dy}{dx}-\frac{dy_R}{dx}\right\|_2^2 = \int_0^1\left(\frac{dy}{dx}-\frac{dy_R}{dx}\right)^2 dx = \int_0^1\left(\sum_{r=R}^{\infty} c_r\,\tilde P_r^{a}(x)\right)^2 dx,$$
or
$$\left\|\frac{dy}{dx}-\frac{dy_R}{dx}\right\|_2^2 = \sum_{r=R}^{\infty} c_r^2\,\frac{Y_r^{-a,a}}{2^{2a+1}}.$$
From Equation (36), we get
$$c_r = \frac{2^{2a+1}}{Y_r^{-a,a}}\int_0^1 \frac{dy}{dx}\,\tilde P_r^{a}(t)\,\tilde w(t)\,dt.$$
Substituting the value of $\tilde P_r^{a}(t)$ from Equation (13) and $\tilde w(t) = (1-t)^{-a}\,t^{-a}$ in Equation (39), and using $\left|\frac{dy}{dx}\right| \le C$, we get
$$|c_r| \le \frac{2^{2a+1}}{Y_r^{-a,a}}\,C\int_0^1 \big|\tilde P_r^{a}(t)\,\tilde w(t)\big|\,dt \le \frac{C\,2^{a}\,\Gamma(1-a)\,(2r+1)\,(r!)^2}{\Gamma(r+a+1)\,\Gamma(1-a+r)\,\Gamma(2-a+r)},$$
$$|c_r|^2 \le \left(\frac{C\,2^{a}\,\Gamma(1-a)\,(2r+1)\,(r!)^2}{\Gamma(r+a+1)\,\Gamma(1-a+r)\,\Gamma(2-a+r)}\right)^2.$$
Thus,
$$\sum_{r=R}^{\infty} c_r^2\,\frac{Y_r^{-a,a}}{2^{2a+1}} \le \Gamma(1-a)\sum_{r=R}^{\infty}\frac{C^2\,(r!)^2\,(2r+1)}{\Gamma(r+a+1)\,\Gamma(1-a+r)\,\Gamma(2-a+r)},$$
$$\sum_{r=R}^{\infty} c_r^2\,\frac{Y_r^{-a,a}}{2^{2a+1}} \le \frac{C^2\,\Gamma(1-a)\,(1+2R)\,(R!)^2\,(1-a+R)}{\Gamma(R+a+1)\,\Gamma(1-a+R)\,\Gamma(2-a+R)}, \quad 0 < a < 1,$$
which completes the proof of Lemma 2. □

4. Error Analysis

In this section, we estimate the error by considering two cases. In the linear case, this is done by comparing the exact and approximate solutions; in the nonlinear case, we first assume that $G$ satisfies a Lipschitz condition and then apply the usual process to estimate the error.
Let $E_R(x) = y(x) - y_R(x)$ be the error function, where $y(x)$ is the exact solution and $y_R(x)$ the approximate solution. From Equation (5), we get
$$(B_P^{\alpha} y_R)(x) = H(y_R(x)) = \phi(x) + g(x)\,y_R(x) + \int_0^x \rho(x,t)\,G(y_R(t))\,dt.$$
Subtracting Equation (42) from Equation (5) and simplifying, we get
$$g(x)\,\big(y(x)-y_R(x)\big) = \int_0^x \rho(x,t)\,G\big(y(t)-y_R(t)\big)\,dt - \big(B_P^{\alpha}(y-y_R)\big)(x),$$
that is,
$$g(x)\,E_R(x) = \int_0^x \rho(x,t)\,G(E_R(t))\,dt - \big(B_P^{\alpha}(y-y_R)\big)(x),$$
$$|g(x)\,E_R(x)| \le \left|\int_0^x \rho(x,t)\,G(E_R(t))\,dt\right| + \left|\big(B_P^{\alpha}(y-y_R)\big)(x)\right|.$$
Here, we consider two cases for the function $G$.
Case 1.
When $G$ is linear, we have
$$|g(x)\,E_R(x)| \le Q\left|\int_0^x E_R(t)\,dt\right| + \left|\big(B_P^{\alpha}(y-y_R)\big)(x)\right|,$$
where $Q = \max \rho(x,t)$. Now, by using Gronwall's inequality,
$$\|g(x)\,E_R(x)\|_2 \le \left\|\big(B_P^{\alpha}(y-y_R)\big)(x)\right\|_2.$$
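For reference (a standard result, not restated in the paper), the integral form of Gronwall's inequality being invoked is:

```latex
u(x) \le \beta(x) + Q \int_0^x u(t)\,dt \quad \text{for } x \in [0,1]
\;\Longrightarrow\;
u(x) \le \beta(x) + Q \int_0^x \beta(t)\, e^{Q(x-t)}\,dt .
```

Applied with $u = |g\,E_R|$ and $\beta = \big|(B_P^{\alpha}(y-y_R))\big|$, it bounds $\|g(x)\,E_R(x)\|_2$ by a constant multiple (depending only on $Q$) of $\|(B_P^{\alpha}(y-y_R))(x)\|_2$; that multiplicative constant is absorbed into the constant $k$ in the final bound.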
Now,
$$\left\|\big(B_P^{\alpha}(y-y_R)\big)(x)\right\|_2 \le \|K_1\|_2 + \|K_2\|_2,$$
where $K_1$ and $K_2$ are defined by
$$K_1 = r\int_0^x w_{1-\alpha}(x,t)\,D\big(y(t)-y_R(t)\big)\,dt, \qquad K_2 = s\int_x^1 w_{1-\alpha}(t,x)\,D\big(y(t)-y_R(t)\big)\,dt.$$
Since $w_{1-\alpha}(x,t)\in L^2$, by Lemma 1 there are constants $k_1, k_2$ such that
$$\|K_1\|_2 \le k_1\,\big\|D\big(y(x)-y_R(x)\big)\big\|_2 \quad\text{and}\quad \|K_2\|_2 \le k_2\,\big\|D\big(y(x)-y_R(x)\big)\big\|_2.$$
Thus,
$$\left\|\big(B_P^{\alpha}(y-y_R)\big)(x)\right\|_2 \le \Lambda\,\big\|D\big(y(x)-y_R(x)\big)\big\|_2, \quad \Lambda = k_1 + k_2.$$
Using Lemma 2,
$$\left\|\big(B_P^{\alpha}(y-y_R)\big)(x)\right\|_2 \le \Lambda\,\frac{C^2\,\Gamma(1-a)\,(1+2R)\,(R!)^2\,(1-a+R)}{\Gamma(R+a+1)\,\Gamma(1-a+R)\,\Gamma(2-a+R)}.$$
From Equations (46) and (49), we have
$$\|g(x)\,E_R(x)\|_2 \le k\,\frac{C^2\,\Gamma(1-a)\,(1+2R)\,(R!)^2\,(1-a+R)}{\Gamma(R+a+1)\,\Gamma(1-a+R)\,\Gamma(2-a+R)}.$$
Since $g(x)\neq 0$, it follows that $E_R(x)\to 0$, i.e., $y_R(x)\to y(x)$, as $R\to\infty$.
Case 2.
When $G$ is nonlinear.
Lipschitz condition: a function $f(x,y)$ satisfies a Lipschitz condition in the variable $y$ on a set $D\subset\mathbb{R}^2$ if there is a constant $L>0$ such that
$$|f(x,y_1) - f(x,y_2)| \le L\,|y_1 - y_2|.$$
We assume that $G$ satisfies a Lipschitz condition in the variable $y$, so
$$|G(y_1(t)) - G(y_2(t))| \le L\,|y_1(t) - y_2(t)|,$$
where $L$ is a Lipschitz constant.
From Equation (43), we have
$$|g(x)\,E_R(x)| \le \left|\int_0^x \rho(x,t)\,G(E_R(t))\,dt\right| + \left|\big(B_P^{\alpha}(y-y_R)\big)(x)\right|,$$
or
$$|g(x)\,E_R(x)| \le L\,Q\left|\int_0^x E_R(t)\,dt\right| + \left|\big(B_P^{\alpha}(y-y_R)\big)(x)\right|.$$
Now, following similar steps as those discussed for Equation (44), the convergence bound can be proved, as in Equation (51).

Error Estimate

In this section, we discuss the calculation of the error of the given problem. Let $E_R(x) = y(x) - y_R(x)$ denote the error of $y_R(x)$ with respect to the exact solution $y(x)$. Replacing $y(x)$ by the approximate solution $y_R(x)$ in Equation (5), we obtain
$$(B_P^{\alpha} y_R)(x) + \varepsilon_R(x) = \phi(x) + g(x)\,y_R(x) + \int_0^x \rho(x,t)\,G(y_R(t))\,dt,$$
with $y_R(0) = y_0$. Here, $\varepsilon_R(x)$ is the perturbation (residual) function, which can be calculated as
$$\varepsilon_R(x) = \phi(x) + g(x)\,y_R(x) + \int_0^x \rho(x,t)\,G(y_R(t))\,dt - (B_P^{\alpha} y_R)(x).$$
Subtracting Equation (52) from Equation (5), we get
$$(B_P^{\alpha} E_R)(x) = g(x)\,E_R(x) + \int_0^x \rho(x,t)\,G(E_R(t))\,dt + \varepsilon_R(x),$$
with the initial condition $E_R(0) = (E_0)_R$. This error equation has the same form as the original problem, and Equation (54) can therefore be solved by the general methods discussed above.

5. Numerical Examples

To verify the theoretical results, we consider convolution-type kernels in the GFIDEs and use Jacobi poly-fractonomials as the basis for the approximate solution. We compute the maximum absolute error while varying the number of basis elements in the collocation method. In all numerical results, the number of basis elements and the maximum absolute error are denoted by R and MAE, respectively. All simulations were performed in MATLAB R2015a (The MathWorks, Inc., Natick, MA, USA) on a computer with 4.00 GB of RAM (3.88 GB usable) and a 64-bit operating system with an x64-based processor.
Example 1.
Consider the problem with $\alpha = \frac{2}{3}$, $n = 1$:
$$\int_0^x \frac{\{a+(1-a)(x-t)\}^{-\alpha}}{\Gamma(1-\alpha)}\,D^n y(t)\,dt + \int_x^1 \frac{\{a+(1-a)(t-x)\}^{-\alpha}}{\Gamma(1-\alpha)}\,D^n y(t)\,dt - y(x) - \int_0^x (x-t)^{-1/2}\,G(y(t))\,dt = -x^2 - \frac{16\,x^{5/2}}{15} + \frac{9\,x^{4/3}}{2\,\Gamma(\frac{1}{3})} + \frac{3\,(1-x)^{1/3}(1+3x)}{2\,\Gamma(\frac{1}{3})},$$
with $y(0) = 0$. This GFIDE has the exact solution $y(x) = x^2$ for $a = 0$.
In this example, the exact solution of the problem is a second-degree polynomial, so in the approximation the basis sizes R = 2, 3, 4 are chosen, and the corresponding maximum absolute error is calculated for each choice of basis; these findings are shown in Table 1. We also observe the variation in the approximate solution for the different values of $a = \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}$, as shown in Figure 1. The MAE is calculated, and the graph of the error is shown in Figure 2. We observe that as $a$ tends to zero, the error is reduced and the approximate solution approaches the exact solution.
Example 2.
Consider the equation with $\alpha = \frac{1}{4}$, $n = 1$:
$$\int_0^x (1-\alpha)(x-t)\,D^n y(t)\,dt + \int_x^1 (1-\alpha)(t-x)\,D^n y(t)\,dt - y(x) - \int_0^x (x-t)^{-1/3}\,G(y(t))\,dt = \frac{17}{16} - \frac{3x}{2} - x^2 - \frac{x^3}{2} + \frac{3x^4}{8} - \frac{27}{440}\,x^{8/3}\,(11+9x),$$
with initial condition $y(0) = 0$ and exact solution $y(x) = x^2 + x^3$.
We solved this problem by choosing different numbers of basis elements, R = 2, 3, 4. The corresponding MAEs for the different values of R are presented in Table 1, and the corresponding error graph is shown in Figure 3 for R = 3. The approximate solutions of Example 2 are presented in Figure 4 for the different values $\alpha = \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}$, and $\frac{1}{64}$. For $\alpha = \frac{1}{4}$, the numerical solution overlaps the exact solution. Figure 5 shows the relationship between the exact and approximate solutions of the present example. It is clear that as $\alpha$ approaches $\frac{1}{4}$, the numerical solution converges to the exact solution. Table 2 presents a comparison of the MAEs with the method proposed in [29].
Example 3.
Consider the nonlinear case of the problem in Equations (5) and (6) with $\alpha = \frac{1}{2}$, $n = 1$:
$$\int_0^x \frac{(x-t)^{-\alpha}}{\Gamma(1-\alpha)}\,D^n y(t)\,dt - q(x)\,y(x) - \int_0^x (x-t)^{-1/2}\,G(y(t))\,dt = \frac{2\,\operatorname{arcsinh}(\sqrt{x})}{\sqrt{\pi}\,\sqrt{1+x}} - \sqrt{x}\,\big(x+(1+x)\ln(1+x)\big) - \ln(1+x)\,\big(2\sqrt{x}+2x^{3/2}-(\sqrt{x}+x^{3/2})\ln(1+x)\big),$$
with $y(0) = 0$, $q(x) = 2\sqrt{x}+2x^{3/2}-(\sqrt{x}+x^{3/2})\ln(1+x)$, and exact solution $y(x) = \ln(1+x)$.
In this case, the approximate solution is obtained for R = 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11. The MAEs corresponding to these cases are calculated, and a comparison with other polynomial bases is shown in Table 3. We also plot MAE versus R in Figure 6; it can be observed that as R increases, the MAE tends to zero. A graph of the exact and approximate solutions is presented in Figure 7.
Example 4.
Consider the problem
$$\int_0^x \frac{\{d+(1-d)(x-t)\}^{-\alpha}}{\Gamma(1-\alpha)}\,D^n y(t)\,dt + \int_x^1 \frac{\{d+(1-d)(t-x)\}^{-\alpha}}{\Gamma(1-\alpha)}\,D^n y(t)\,dt - y(x) - \int_0^x (x-t)^{-1/2}\,G(y(t))\,dt = -x - x^3 - \frac{4}{105}\,x^{3/2}\,(35+24x^2) + \frac{3\left(x^{1/3}+\frac{27}{14}\,x^{7/3}\right)}{\Gamma(\frac{1}{3})} + \frac{3\left((1-x)^{1/3}+\frac{3}{14}\,(1-x)^{1/3}\,(2+3x+9x^2)\right)}{\Gamma(\frac{1}{3})},$$
with $\alpha = \frac{2}{3}$, $n = 1$, and $y(0) = 0$. The exact solution of this problem is $y(x) = x^3 + x$.
Here, as in the previous cases, we calculate the MAE corresponding to different values of R = 3, 4, 5; the case R = 4 is shown in Figure 8. We also depict the behavior of the solution for the different values $d = \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \frac{1}{32}$, and $\frac{1}{64}$. As d tends to zero, the approximate solution converges to the exact solution, as shown in Figure 9. In Figure 5, a graph of the exact and the approximate solutions is given.
Example 5.
Consider, with $\alpha = \frac{1}{4}$ and $n = 1$,
$$\int_0^x (1-\alpha)(x-t)\,D^n y(t)\,dt + \int_x^1 (1-\alpha)(t-x)\,D^n y(t)\,dt - y(x) - \int_0^x (x-t)^{-1/3}\,G(y(t))\,dt = -x e^{-x} + \frac{3\left(3-x-e^{1-x}(1+x)\right)}{4e} + \frac{3}{4}\left(1-e^{-x}(1+x)\right) - \frac{5\,(-1)^{1/3}\,e^{-x}\,\Gamma(\frac{2}{3})\left(3\,e^{x}\,(-x)^{5/3}\,(2+3x)\left(\Gamma(\frac{5}{3})-\Gamma(\frac{5}{3},-x)\right)\right)}{9\,\Gamma(\frac{8}{3})}.$$
This problem has the exact solution $y(x) = x\,e^{-x}$ with $y(0) = 0$.
For the given problem, we find the approximate solution by varying the number of basis elements and also calculate the corresponding MAEs, which are shown in Table 4. The graph of MAE versus R is shown in Figure 10. We notice that the MAE decreases as we increase R, and for R ≥ 11 no further change in the MAE is observed. In Figure 7, a graph of the exact and the approximate solutions is shown. We also plot the MAE for R = 12 in Figure 11.

6. Conclusions

A convergent collocation method for solving GFIDEs defined in terms of the B-operator has been developed in this paper. Jacobi poly-fractonomials are used as the basis in the proposed collocation method; this choice helps to increase the accuracy of the approximate solution. The presented method works well on both linear and nonlinear GFIDEs and produces accurate solutions.

Author Contributions

Conceptualization, R.K.P.; Formal analysis, S.K.; Funding acquisition, H.M.S.; Methodology, S.K.; Software, S.K.; Supervision, R.K.P. and G.N.S.; Writing—original draft, S.K.; Writing—review & editing, R.K.P. and H.M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank the reviewers for their comments, which improved the quality of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Srivastava, H.M. Fractional-order derivatives and integrals: Introductory overview and recent developments. Kyungpook Math. J. 2020, 60, 73–116. [Google Scholar]
  2. Kreyszig, E. Introductory Functional Analysis with Applications; Wiley: New York, NY, USA, 1978. [Google Scholar]
  3. Podlubny, I. Fractional Differential Equations; Mathematics in Science and Engineering; Academic Press: San Diego, CA, USA, 1999; Volume 198. [Google Scholar]
  4. Sabatier, J.; Agrawal, O.P.; Machado, J.T. Advances in Fractional Calculus; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4. [Google Scholar]
  5. Oldham, K.; Spanier, J. The Fractional Calculus Theory and Applications of Differentiation and Integration to Arbitrary Order; Elsevier: Amsterdam, The Netherlands, 1974; Volume 111. [Google Scholar]
  6. Oldham, K.B.; Spanier, J. The Fractional Calculus; Mathematics in Science and Engineering; Academic Press: New York, NY, USA; London, UK, 1974; Volume 111. [Google Scholar]
  7. Miller, K.S.; Ross, B. An introduction to the Fractional Calculus and Fractional Differential Equations; Wiley: New York, NY, USA, 1993. [Google Scholar]
  8. Magin, R.L. Fractional calculus in bioengineering, part 2. Crit. Rev. Biomed. Eng. 2004, 32, 195–377. [Google Scholar]
  9. Goodrich, C.S. Existence of a positive solution to a system of discrete fractional boundary value problems. Appl. Math. Comput. 2011, 217, 4740–4753. [Google Scholar] [CrossRef]
  10. Oldham, K.B. Fractional differential equations in electrochemistry. Adv. Eng. Softw. 2010, 41, 9–12. [Google Scholar] [CrossRef]
  11. Engheta, N. On fractional calculus and fractional multipoles in electromagnetism. IEEE Trans. Antennas Propag. 1996, 44, 554–566. [Google Scholar] [CrossRef]
  12. Saadatmandi, A.; Dehghan, M. A Legendre collocation method for fractional integro-differential equations. J. Vib. Control 2011, 17, 2050–2058. [Google Scholar] [CrossRef]
  13. Bagley, R.L.; Torvik, P.J. Fractional calculus in the transient analysis of viscoelastically damped structures. AIAA J. 1985, 23, 918–925. [Google Scholar] [CrossRef]
  14. Tarasov, V.E. Fractional integro-differential equations for electromagnetic waves in dielectric media. Theor. Math. Phys. 2009, 158, 355–359. [Google Scholar] [CrossRef] [Green Version]
  15. Angell, J.; Olmstead, W.E. Singular perturbation analysis of an integro-differential equation modelling filament stretching. Z. Angew. Math. Phys. (ZAMP) 1985, 36, 487–490. [Google Scholar] [CrossRef]
  16. Khosro, S.; Machado, J.T.; Masti, I. On dual Bernstein polynomials and stochastic fractional integro-differential equations. Math. Methods Appl. Sci. 2020, 43, 9928–9947. [Google Scholar]
  17. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006; Volume 204. [Google Scholar]
  18. Yanxin, W.; Zhu, L.; Wang, Z. Fractional-order Euler functions for solving fractional integro-differential equations with weakly singular kernel. Adv. Differ. Equ. 2018, 2018, 254. [Google Scholar]
  19. Saadatmandi, A.; Dehghan, M. A new operational matrix for solving fractional-order differential equations. Comput. Math. Appl. 2010, 59, 1326–1336. [Google Scholar] [CrossRef] [Green Version]
  20. Kumar, K.; Pandey, R.K.; Sharma, S. Numerical Schemes for the Generalized Abel’s Integral Equations. Int. J. Appl. Comput. Math. 2018, 4, 68. [Google Scholar] [CrossRef]
  21. Bonilla, B.; Rivero, M.; Trujillo, J.J. On systems of linear fractional differential equations with constant coefficients. Appl. Math. Comput. 2007, 187, 68–78. [Google Scholar] [CrossRef]
  22. Momani, S.; Aslam Noor, M. Numerical methods for fourth-order fractional integro-differential equations. Appl. Math. Comput. 2006, 182, 754–760. [Google Scholar] [CrossRef]
  23. Momani, S.; Qaralleh, R. An efficient method for solving systems of fractional integro-differential equations. Comput. Math. Appl. 2006, 52, 459–470. [Google Scholar] [CrossRef] [Green Version]
  24. Bhrawy, A.; Zaky, M. Shifted fractional-order Jacobi orthogonal functions: Application to a system of fractional differential equations. Appl. Math. Model. 2016, 40, 832–845. [Google Scholar] [CrossRef]
  25. Kojabad, E.A.; Rezapour, S. Approximate solutions of a sum-type fractional integro-differential equation by using Chebyshev and Legendre polynomials. Adv. Differ. Equ. 2017, 2017, 1–18. [Google Scholar]
  26. Rawashdeh, E.A. Numerical solution of fractional integro-differential equations by collocation method. Appl. Math. Comput. 2006, 176, 1–6. [Google Scholar] [CrossRef]
  27. Syam, M.; Al-Refai, M. Solving fractional diffusion equation via the collocation method based on fractional Legendre functions. J. Comput. Methods Phys. 2014, 2014. [Google Scholar] [CrossRef] [Green Version]
  28. Eslahchi, M.R.; Dehghan, M.; Parvizi, M. Application of the collocation method for solving nonlinear fractional integro-differential equations. J. Comput. Appl. Math. 2014, 257, 105–128. [Google Scholar] [CrossRef]
  29. Sharma, S.; Pandey, R.K.; Kumar, K. Collocation method with convergence for generalized fractional integro-differential equations. J. Comput. Appl. Math. 2018, 342, 419–430. [Google Scholar] [CrossRef]
  30. Odibat, Z.M. Analytic study on linear systems of fractional differential equations. Comput. Math. Appl. 2010, 59, 1171–1183. [Google Scholar] [CrossRef] [Green Version]
  31. Ma, J.; Liu, J.; Zhou, Z. Convergence analysis of moving finite element methods for space fractional differential equations. J. Comput. Appl. Math. 2014, 255, 661–670. [Google Scholar] [CrossRef]
  32. Raslan, K.R.; Ali, K.K.; Mohamed, E.M. Spectral Tau method for solving general fractional order differential equations with linear functional argument. J. Egypt. Math. Soc. 2019, 27, 33. [Google Scholar] [CrossRef] [Green Version]
  33. Zurigat, M.; Momani, S.; Odibat, Z.; Ahmad, A. The homotopy analysis method for handling systems of fractional differential equations. Appl. Math. Model. 2010, 34, 24–35. [Google Scholar] [CrossRef]
  34. Hassani, H.; Machado, J.T.; Naraghirad, E.; Sadeghi, B. Solving nonlinear systems of fractional-order partial differential equations using an optimization technique based on generalized polynomials. Comput. Appl. Math. 2020, 39, 1–19. [Google Scholar] [CrossRef]
  35. Agrawal, O.P. Generalized variational problems and Euler–Lagrange equations. Comput. Math. Appl. 2010, 59, 1852–1864. [Google Scholar] [CrossRef] [Green Version]
  36. Zayernouri, M.; Karniadakis, G.E. Fractional Sturm–Liouville eigen-problems: Theory and numerical approximation. J. Comput. Phys. 2013, 252, 495–517. [Google Scholar] [CrossRef]
Figure 1. Plot of approximate solutions for different values of α and R = 2 for Example 1.
Figure 2. Plot of MAE of Example 1 for R = 2 and α = 2/3.
Figure 3. Plot of MAE of Example 2 for α = 1/4 and R = 3.
Figure 4. Plot of approximate solutions for different values of α for Example 2.
Figure 5. Plot of approximate and exact solutions of Examples 2 and 4.
Figure 6. Plot of MAE for different values of R.
Figure 7. Plot of exact versus approximate solutions of Examples 5 and 3.
Figure 8. Plot of MAE of Example 4 for α = 2/3 and R = 4.
Figure 9. The numerical solutions of Example 4 for different values of d.
Figure 10. Plot of MAE for different values of R.
Figure 11. Plot of MAE of Example 5 for R = 12.
Table 1. MAE of Examples 1 and 2 for different values of R.

R    Example 1           Example 2
2    5.55112 × 10^-17    5.55112 × 10^-17
3    2.77556 × 10^-17    5.55112 × 10^-17
4    8.32667 × 10^-17    3.33067 × 10^-16
Table 2. Comparison of MAEs for Example 2 with the method proposed in [29] for different values of R.

R    Present Method      Method [29]
2    5.55112 × 10^-17    2.2891 × 10^-2
3    5.55112 × 10^-17    2.0455 × 10^-16
Table 3. Comparison of MAEs for Example 3 with the method given in [29] for different values of R.

R    Present Method     Method [29]
3    2.3832 × 10^-3     2.2891 × 10^-3
4    4.3205 × 10^-4     2.0455 × 10^-5
5    3.7277 × 10^-5     4.0699 × 10^-5
6    1.8326 × 10^-6     4.1765 × 10^-6
Table 4. MAEs of Examples 3 and 5 for different values of R.

R     Example 3           Example 5
3     2.3832 × 10^-3      3.35215 × 10^-3
4     4.3205 × 10^-4      4.78585 × 10^-5
5     3.7277 × 10^-5      6.86289 × 10^-6
6     1.8326 × 10^-6      3.02317 × 10^-7
7     2.29402 × 10^-7     1.23070 × 10^-8
8     1.2832 × 10^-8      4.513474 × 10^-10
9     1.41703 × 10^-9     4.58358 × 10^-11
10    6.19955 × 10^-10    1.906378 × 10^-12
11    2.467 × 10^-10      1.68061 × 10^-13
12    3.2156 × 10^-10     1.22863 × 10^-13
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Kumar, S.; Pandey, R.K.; Srivastava, H.M.; Singh, G.N. A Convergent Collocation Approach for Generalized Fractional Integro-Differential Equations Using Jacobi Poly-Fractonomials. Mathematics 2021, 9, 979. https://doi.org/10.3390/math9090979

