Article

Approximate Analytical Solutions for Systems of Fractional Nonlinear Integro-Differential Equations Using the Polynomial Least Squares Method

Department of Mathematics, Politehnica University of Timişoara, 300006 Timişoara, Romania
Fractal Fract. 2021, 5(4), 198; https://doi.org/10.3390/fractalfract5040198
Submission received: 7 October 2021 / Accepted: 2 November 2021 / Published: 4 November 2021
(This article belongs to the Special Issue Numerical Methods for Solving Fractional Differential Problems)

Abstract
We employ the Polynomial Least Squares Method, a relatively new, straightforward and efficient method, to find accurate approximate analytical solutions for a class of systems of fractional nonlinear integro-differential equations. A comparison with previous results by means of an extensive list of test problems illustrates the simplicity and the accuracy of the method.

1. Introduction

The notion of a fractional derivative as a derivative of arbitrary real or complex order has a long history, starting with the works of early titans of mathematics such as Leibniz, Abel, Liouville and Heaviside. The theory and applications of the fractional calculus expanded steadily during the 20th century and exploded in recent decades due to the numerous applications of fractional differential equations in various fields of science and engineering, such as thermal engineering, acoustics, electromagnetism, control, robotics, viscoelasticity, signal processing, etc.
Fractional integro-differential equations are at present the focus of intensive research due to their pivotal role in the modeling of many phenomena and processes in science and engineering, such as electric-circuit analysis, floating structures and viscoelastic material dynamics.
The class of systems of equations studied in this paper is:
D^{\alpha_k} x_k(t) = F_k\!\left( t,\, x_1(t), \dots, x_n(t),\, \int_a^b K_{f_k}\big(t, s, x_1(s), \dots, x_n(s)\big)\, ds,\, \int_a^t K_{v_k}\big(t, s, x_1(s), \dots, x_n(s)\big)\, ds \right), \qquad q_k - 1 < \alpha_k \le q_k,\; q_k \in \mathbb{N}^*,\; k = 1, \dots, n,\; n \in \mathbb{N}^*,
together with a set of conditions of the type:
\sum_{j=0}^{r-1} \left( \sum_{k=1}^{n} \alpha_{ijk}\, x_k^{(j)}(a) + \sum_{k=1}^{n} \beta_{ijk}\, x_k^{(j)}(b) \right) = \mu_i, \qquad i = 0, \dots, r-1,\; r \in \mathbb{N}^*.
Here, for q \in \mathbb{N}^*, D^{\alpha} denotes the Caputo fractional derivative of order \alpha, namely:
D^{\alpha} x(t) = \begin{cases} \dfrac{1}{\Gamma(q-\alpha)} \displaystyle\int_0^t (t-s)^{\,q-\alpha-1}\, x^{(q)}(s)\, ds, & q-1 < \alpha < q, \\[1mm] x^{(q)}(t), & \alpha = q. \end{cases}
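Since PLSM works with polynomial trial functions, the only Caputo derivatives that actually need to be evaluated in the computations below are those of monomials. Recall that, for an integer exponent m and q - 1 < \alpha < q,
D^{\alpha} t^{m} = \dfrac{\Gamma(m+1)}{\Gamma(m-\alpha+1)}\, t^{\,m-\alpha} \ \text{ for } m \ge q, \qquad D^{\alpha} t^{m} = 0 \ \text{ for } m = 0, 1, \dots, q-1.
For instance, D^{0.5} t = \dfrac{2\sqrt{t}}{\sqrt{\pi}} and D^{1.5} t^{2} = \dfrac{4\sqrt{t}}{\sqrt{\pi}}, which are precisely the derivatives appearing in Application 1 below.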
The kernel functions K_{f_k}, K_{v_k} and the functions F_k are assumed to be suitably defined on the interval [a, b], such that the problem consisting of Equation (1) together with the conditions (2) admits a solution.
This class of systems of equations is evidently a very general one since it includes both Fredholm and Volterra type equations, linear and nonlinear. Unfortunately, with the exception of a relatively small number of simple cases (such as most of the test problems included as examples), the exact solutions of a nonlinear integro-differential system of equations of the type (1) cannot be found. Thus numerical solutions or (preferably) approximate analytical solutions must be computed.
In recent years, many methods and techniques to find approximate solutions for systems of integro-differential equations have been proposed, among which we mention:
  • In 2006, Momani and Qaralleh used the Adomian Decomposition Method [1] to find approximate solutions for a type of system of equations similar to (1) but including only the Volterra term (and not the Fredholm one). The method decomposes the solution of the problem into a rapidly convergent series and replaces the nonlinear term by a series of the Adomian polynomials. The method worked well but unfortunately no error reports were included in the paper.
  • In 2009, Zurigat et al. employed the well-known Homotopy Analysis Method [2] and in 2020 Akbar et al. employed the Optimal Homotopy Analysis Method [3] to find approximate solutions for systems of fractional integro-differential equations also of Volterra type only. These methods are based on the concept of the homotopy from topology and generate a convergent series solution for nonlinear systems. The experimental results were good with low errors. An interesting feature of the Homotopy Analysis Method is the existence of the auxiliary parameters which can be adjusted to control the convergence region of the solution series. Unfortunately the best choices of these parameters are not always clear. On the other hand, in [2] the authors showed that for certain values of these parameters the Homotopy Analysis Method can also replicate the results obtained in [1] by means of the Adomian Decomposition Method.
  • In 2010, Saeed and Sdeq employed the widely used Homotopy Perturbation Method [4] to find approximate solutions for systems of linear Fredholm fractional integro-differential equations. The Homotopy Perturbation Method combines the traditional perturbation method and the concept of homotopy from topology and its best feature is that it does not require the existence of a small parameter regarding the perturbation. The errors corresponding to the approximations are presented and the convergence of the method is fast: from the experimental data it follows that the order of the error seems to be proportional to the number of terms considered in the sum of the series.
  • The Chebyshev Pseudo-Spectral Method, employed in 2013 by Khader and Sweilam [5] and in 2017 by Zedan et al. [6] to find approximate solutions for a type of system of equations similar to (1), but including only Volterra integrals, uses the properties of Chebyshev polynomials to reduce the problem to a linear or nonlinear system of algebraic equations. From the numerical data the convergence again appears fast, with an order of the error roughly proportional to the number of terms in the series sum.
  • In 2014, Bushnaq et al. presented the Reproducing Kernel Hilbert Space Method [7], which is a kernel-based approximation method, to find approximate solutions for systems of Volterra fractional integro-differential equations. The errors presented are relatively low but, while the absolute convergence of the method is proved, from the experimental data no information about the speed of the convergence can be extracted.
  • Chebyshev Wavelets Expansion Methods were used in 2014 by Heydari et al. [8] to solve systems of nonlinear singular fractional Volterra integro-differential equations, and in 2018 by Zhou and Xu [9] and in 2021 by Bargamadi et al. [10] to solve fractional Volterra–Fredholm integro-differential equations. This category of methods uses Chebyshev wavelets as a basis and transforms the problem into a system of algebraic equations. These methods usually yield solutions with very low errors which converge relatively fast.
  • In 2015, Al-Marashi used a B-Spline Method [11] to find approximate solutions for systems of linear fractional Volterra integro-differential equations. The method uses B-Spline functions of different degrees to transform the problem into a system of linear equations. The examples show that the errors are relatively low, and the convergence is illustrated by the decrease in the errors as the degree increases.
  • In 2015, Khalil and Khan used the Shifted Legendre Polynomials Method [12] to solve a coupled system of linear Fredholm integro-differential equations. In this method the initial problem is transformed into a series of algebraic equations of the shifted Legendre expansion coefficients. The errors are low enough and the examples clearly illustrate the convergence of the method.
  • In 2015, Asgari introduced a method based on a new Operational Matrix of Triangular Functions [13]. The method is applied to a system of linear fractional integro-differential Volterra equations and transforms it by using triangular functions into a system of linear algebraic equations and the corresponding errors are sufficiently low.
  • In 2016 [14] and in 2018 [15], Deif and Grace used the Iterative Refinement Method to find approximate solutions for systems of linear fractional Volterra and Fredholm integro-differential equations. The method employs an approximate solution which is repeatedly updated, based upon a computed residual, to adjust the system input. The examples show low errors and relatively fast convergence.
  • Block-Pulse Functions Methods were applied in 2018 by Hesameddini and Shahbazi [16] and in 2019 by Xie and Yi [17] to find approximate solutions for systems of nonlinear fractional integro-differential Volterra and Fredholm equations. By means of Block–Pulse functions (or hybrid Bernstein Block–Pulse functions), the initial problem is reduced to a system of algebraic equations. The examples show relatively low errors and good convergence properties.
  • In 2018, Wang et al. employed the Bernoulli Wavelets Method [18] to solve coupled systems of nonlinear fractional integro-differential Volterra equations. The method transforms the problem into a system of algebraic equations by means of a Bernoulli wavelets basis expansion and the examples show relatively low errors and illustrate the convergence.
  • In 2019, Mohammed and Malik employed a Power Series Method [19] to find approximate solutions for a system of linear fractional integro-differential Volterra equations. The solution of the problem is computed approximately as a partial sum of a power series. The numerical results are in good agreement with results obtained by using other methods but unfortunately no clear information regarding accuracy and speed of convergence could be extracted from the examples.
  • In 2020, Didgar et al. used a Taylor Expansion Method [20] to solve systems of linear fractional integro-differential Volterra–Fredholm equations. The method transforms the problem into a system of linear equations by approximating the solutions via mth-order Taylor polynomials. The exact solutions of all the examples included are polynomial functions and due to its nature the method is able to find the exact solutions.
  • In 2020, Saemi et al. used a Müntz–Legendre Wavelets Method [21] to find approximate solutions for systems of nonlinear fractional integro-differential Volterra–Fredholm equations. Employing Müntz–Legendre wavelets, the method converts the system of integro-differential equations into a system of linear or nonlinear algebraic equations. The examples show low errors and good convergence properties.
  • In 2021, Duangpan et al. used the Finite Integration Method [22] to find approximate solutions for systems of linear fractional Volterra integro-differential equations by transforming them into systems of algebraic equations via shifted Chebyshev polynomials. The examples show very low errors and relatively fast convergence.
The rest of the paper is structured as follows: In Section 2, we present the Polynomial Least Squares Method (denoted from this point forward as PLSM), in Section 3, we present the results of an extensive testing process involving most of the usual test problems included in similar studies and in Section 4 we present the conclusions of the study.

2. The Polynomial Least Squares Method

In the following we will denote by the problem (1)+(2) the system of Equation (1) together with the conditions (2).
Let \tilde{x}_1(t), \dots, \tilde{x}_n(t) be a set of approximate solutions of the system (1). The overall error obtained by replacing the set of exact solutions x_1(t), \dots, x_n(t) with these approximations can be described by the remainder:
R(t, \tilde{x}_1, \dots, \tilde{x}_n) = \sum_{k=1}^{n} \left[ D^{\alpha_k} \tilde{x}_k(t) - F_k\!\left( t,\, \tilde{x}_1(t), \dots, \tilde{x}_n(t),\, \int_a^b K_{f_k}\big(t, s, \tilde{x}_1(s), \dots, \tilde{x}_n(s)\big)\, ds,\, \int_a^t K_{v_k}\big(t, s, \tilde{x}_1(s), \dots, \tilde{x}_n(s)\big)\, ds \right) \right]^2
We will find a set of approximate polynomial solutions of (1)+(2) on the [ a , b ] interval, solutions which satisfy the following conditions:
R(t, \tilde{x}_1, \dots, \tilde{x}_n) < \varepsilon
\sum_{j=0}^{r-1} \left( \sum_{k=1}^{n} \alpha_{ijk}\, \tilde{x}_k^{(j)}(a) + \sum_{k=1}^{n} \beta_{ijk}\, \tilde{x}_k^{(j)}(b) \right) = \mu_i, \qquad i = 0, \dots, r-1
Definition 1.
We call a set of ε-approximate polynomial solutions of the problem (1)+(2) a set of approximate polynomial solutions \tilde{x}_1, \dots, \tilde{x}_n satisfying the relations (5) and (6).
Definition 2.
We call a set of weak δ-approximate polynomial solutions of the problem (1)+(2) a set of approximate polynomial solutions \tilde{x}_1, \dots, \tilde{x}_n satisfying the relation \int_a^b R(t, \tilde{x}_1, \dots, \tilde{x}_n)\, dt \le \delta together with the initial conditions (6).
Definition 3.
For k = 1, \dots, n, we consider the sequences of polynomials P_{km}(t) = a_{k0} + a_{k1} t + \dots + a_{km} t^{m}, with a_{ki} \in \mathbb{R}, i = 0, 1, \dots, m, satisfying the conditions:
\sum_{j=0}^{r-1} \left( \sum_{k=1}^{n} \alpha_{ijk}\, P_{km}^{(j)}(a) + \sum_{k=1}^{n} \beta_{ijk}\, P_{km}^{(j)}(b) \right) = \mu_i, \qquad i = 0, \dots, r-1
We call the sequences of polynomials P_{km}(t) convergent to the solution of the problem (1)+(2) if \lim_{m \to \infty} R(t, P_{1m}, \dots, P_{nm}) = 0.
The following convergence theorem holds:
Theorem 1.
The necessary condition for the problem (1)+(2) to admit a set of sequences of polynomials P_{km}(t) convergent to the solutions of this problem is \lim_{m \to \infty} \int_a^b R(t, T_{1m}, \dots, T_{nm})\, dt = 0, where T_{km}(t), k = 1, \dots, n, is a set of weak ε-approximate polynomial solutions of the problem (1)+(2).
Proof. 
We will find a set of weak ε-approximate polynomial solutions of the type:
T_{km}(t) = \sum_{l=0}^{m} c_{kl} \cdot t^{l}, \qquad k = 1, \dots, n,
where the constants c_{k0}, c_{k1}, \dots, c_{km}, k = 1, \dots, n, are calculated using the steps outlined in the following.
  • By substituting the approximate solutions (7) in the system (1) we obtain the following expression:
    R_c(t, c_{k0}, c_{k1}, \dots, c_{km}) = R(t, T_{1m}(t), \dots, T_{nm}(t)) = \sum_{k=1}^{n} \left[ D^{\alpha_k} T_{km}(t) - F_k\!\left( t,\, T_{1m}(t), \dots, T_{nm}(t),\, \int_a^b K_{f_k}\big(t, s, T_{1m}(s), \dots, T_{nm}(s)\big)\, ds,\, \int_a^t K_{v_k}\big(t, s, T_{1m}(s), \dots, T_{nm}(s)\big)\, ds \right) \right]^2
    If we could find constants c_{k0}^0, c_{k1}^0, \dots, c_{km}^0 such that R_c(t, c_{k0}^0, c_{k1}^0, \dots, c_{km}^0) = 0 for any t \in [a, b] and the equivalents of the conditions (2) are also satisfied, then by substituting c_{k0}^0, c_{k1}^0, \dots, c_{km}^0 in (7) we would obtain the exact solution of (1)+(2).
  • Next we attach to the problem (1)+(2) the following real functional:
    J(c_{kr}, c_{k(r+1)}, \dots, c_{km}) = \int_a^b R_c(t, c_{k0}, c_{k1}, \dots, c_{km})\, dt,
    where c_{k0}, c_{k1}, \dots, c_{k(r-1)} may be computed as functions of c_{kr}, c_{k(r+1)}, \dots, c_{km} by using the set of conditions.
  • Next we compute c_{kr}^0, c_{k(r+1)}^0, \dots, c_{km}^0 as the values which give the minimum of the functional (9), and c_{k0}^0, c_{k1}^0, \dots, c_{k(r-1)}^0 again as functions of c_{kr}^0, c_{k(r+1)}^0, \dots, c_{km}^0 by using the conditions.
  • Using the constants c_{k0}^0, c_{k1}^0, \dots, c_{km}^0 thus determined, we consider the set of polynomials:
    T_{km}(t) = \sum_{l=0}^{m} c_{kl}^{0} \cdot t^{l}, \qquad k = 1, \dots, n.
Based on the way the coefficients of the polynomials T_{km}(t) are computed, and taking into account the relations (7)–(9), the following inequalities hold:
0 \le \int_a^b R(t, T_{1m}(t), \dots, T_{nm}(t))\, dt \le \int_a^b R(t, P_{1m}(t), \dots, P_{nm}(t))\, dt, \qquad \forall\, m \in \mathbb{N}.
It follows that 0 \le \lim_{m \to \infty} \int_a^b R(t, T_{1m}(t), \dots, T_{nm}(t))\, dt \le \lim_{m \to \infty} \int_a^b R(t, P_{1m}(t), \dots, P_{nm}(t))\, dt = 0.
We obtain \lim_{m \to \infty} \int_a^b R(t, T_{1m}(t), \dots, T_{nm}(t))\, dt = 0.
From this limit we conclude that for every \varepsilon > 0 there exists m_0 \in \mathbb{N} such that for every m \in \mathbb{N} with m > m_0, the set T_{km}(t), k = 1, \dots, n, is a set of weak ε-approximate polynomial solutions of the problem (1)+(2). □
Remark 1.
Any set of ε-approximate polynomial solutions of the problem (1)+(2) is also a set of weak \varepsilon^2 \cdot (b-a)-approximate polynomial solutions, but the opposite is not always true. It follows that the set of weak approximate solutions of the problem (1)+(2) also contains the set of approximate solutions of the problem.
Taking into account the above remark, in order to find a set of ε-approximate polynomial solutions of the problem (1)+(2) by the Polynomial Least Squares Method, we will first determine a set of weak approximate polynomial solutions T_{1m}(t), \dots, T_{nm}(t). If R(t, T_{1m}(t), \dots, T_{nm}(t)) < \varepsilon, then T_{1m}(t), \dots, T_{nm}(t) is also a set of ε-approximate polynomial solutions of the problem.
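To make these steps concrete, the following Python sketch applies them to a small illustrative example constructed only for this purpose (it is not one of the test problems of Section 3): the single equation D^{0.5} x(t) = \int_0^1 x(s)\, ds + 2\sqrt{t/\pi} - 1/2 with x(0) = 0, whose exact solution is x(t) = t. The first-degree trial solution, the quadrature grid and the bounded scalar minimization are illustrative choices, not the settings used for the results reported below.

import numpy as np
from math import gamma, pi
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

# Illustrative problem: D^{0.5} x(t) = int_0^1 x(s) ds + 2*sqrt(t/pi) - 1/2,
# x(0) = 0, exact solution x(t) = t.  Trial solution: x~(t) = c*t (so x~(0) = 0).

tt = np.linspace(0.0, 1.0, 401)                      # quadrature grid on [0, 1]

def caputo_monomial(alpha, m, t):
    # Caputo derivative of t^m for integer m >= ceil(alpha)
    return gamma(m + 1) / gamma(m + 1 - alpha) * t ** (m - alpha)

def remainder(c):
    lhs = c * caputo_monomial(0.5, 1, tt)            # D^{0.5}(c*t) = 2*c*sqrt(t/pi)
    fredholm = c * 0.5                               # int_0^1 c*s ds
    forcing = 2.0 * np.sqrt(tt / pi) - 0.5
    return (lhs - fredholm - forcing) ** 2           # squared remainder, as in (4)

def J(c):
    return trapezoid(remainder(c), tt)               # the functional (9), by quadrature

best = minimize_scalar(J, bounds=(-5.0, 5.0), method="bounded")
print(best.x)   # close to 1, i.e. the exact solution x(t) = t is recovered

For systems, the same structure applies, with one remainder per equation and the coefficients of all trial polynomials collected into a single parameter vector.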

3. Numerical Examples

In this section we present the results obtained by using PLSM for many (if not most) of the usual test-problems included in similar papers.
In the first application, we present the computations in detail, including some remarks about the practical implementation of the method, while in most of the subsequent applications we only present the results of the computations.

3.1. Application 1: System of Fractional Fredholm Integro-Differential Equations

The first application consists of the pair of equations [4,14,20]:
D^{0.5} x_1(t) = \int_0^1 \big( x_1(s) + x_2(s) \big)\, ds + 2\sqrt{\dfrac{t}{\pi}} - \dfrac{5}{6}, \qquad D^{1.5} x_2(t) = \int_0^1 t\, \big( x_1(s) - x_2(s) \big)\, ds + 4\sqrt{\dfrac{t}{\pi}} - \dfrac{t}{6}
together with the initial conditions x 1 ( 0 ) = 0 , x 2 ( 0 ) = 0 .
The exact solutions of this problem are x 1 e ( t ) = t , x 2 e ( t ) = t 2 .
We will follow the steps of the algorithm described in the Proof from the previous section. First, by choosing the polynomials x ˜ 1 ( t ) = c 11 · t + c 10 and x ˜ 2 ( t ) = c 22 · t 2 + c 21 · t + c 20 , from the initial conditions it follows that c 10 = 0 and c 20 = 0 , hence x ˜ 1 ( t ) = c 11 · t and x ˜ 2 ( t ) = c 22 · t 2 + c 21 · t .
The corresponding remainder (8) is:
R_c(t, c_{11}, c_{21}, c_{22}) = \left[ 2(c_{11}-1)\sqrt{\dfrac{t}{\pi}} - \dfrac{c_{11}}{2} - \dfrac{c_{21}}{2} - \dfrac{c_{22}}{3} + \dfrac{5}{6} \right]^2 + \left[ 4(c_{22}-1)\sqrt{\dfrac{t}{\pi}} - t\left( \dfrac{c_{11}}{2} - \dfrac{c_{21}}{2} - \dfrac{c_{22}}{3} - \dfrac{1}{6} \right) \right]^2,
and the corresponding functional (9) is:
J(c_{11}, c_{21}, c_{22}) = \dfrac{1}{27}\left[ 9c_{11}^2 + 3c_{11}(3c_{21} + 2c_{22} - 8) + (3c_{21} + 2c_{22})^2 - 21c_{21} - 14c_{22} + 19 \right] - \dfrac{4}{45\sqrt{\pi}}\left[ 15c_{11}^2 + c_{11}(15c_{21} + 28c_{22} - 58) + c_{21}(3 - 18c_{22}) - 4c_{22}(3c_{22} + 1) + 31 \right] + \dfrac{2}{\pi}\left[ (c_{11} - 2)c_{11} + 4(c_{22} - 2)c_{22} + 5 \right].
In order to find the minimum of the functional (9), we solve the system consisting of the equations \partial J / \partial c_{11} = 0, \partial J / \partial c_{21} = 0 and \partial J / \partial c_{22} = 0 and we obtain the solution c_{11} = 1, c_{21} = 0, c_{22} = 1, which is a stationary (critical) point of the functional. It is easy to show that this point indeed corresponds to a minimum of J, and we obtain the values c_{11}^0 = 1, c_{21}^0 = 0, c_{22}^0 = 1. From the initial conditions, obviously c_{10}^0 = 0 and c_{20}^0 = 0, and thus \tilde{x}_1(t) = t and \tilde{x}_2(t) = t^2, which means that PLSM is able to find, in a very simple manner, the exact solution of the problem. We remark that the previous methods in [4] (Homotopy Perturbation Method) and [14] (Iterative Refinement Method) were only able to find approximate solutions.
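The stationary-point computation above can also be checked symbolically. The following SymPy sketch builds the remainder and the functional directly from the equations of this application and solves the resulting linear system; the use of SymPy and the variable names are, of course, just one possible implementation.

import sympy as sp

t, s = sp.symbols("t s", positive=True)
c11, c21, c22 = sp.symbols("c11 c21 c22", real=True)

x1 = c11 * t                          # trial solutions with x1(0) = x2(0) = 0
x2 = c22 * t**2 + c21 * t

# Caputo derivatives of the monomial trial functions:
D05_x1 = c11 * sp.sqrt(t) / sp.gamma(sp.Rational(3, 2))          # D^{0.5}(c11*t)
D15_x2 = 2 * c22 * sp.sqrt(t) / sp.gamma(sp.Rational(3, 2))      # D^{1.5}(c22*t^2); D^{1.5}(c21*t) = 0

# Remainders of the two equations of Application 1:
R1 = D05_x1 - sp.integrate(x1.subs(t, s) + x2.subs(t, s), (s, 0, 1)) \
            - (2 * sp.sqrt(t / sp.pi) - sp.Rational(5, 6))
R2 = D15_x2 - t * sp.integrate(x1.subs(t, s) - x2.subs(t, s), (s, 0, 1)) \
            - (4 * sp.sqrt(t / sp.pi) - t / 6)

J = sp.integrate(sp.expand(R1**2 + R2**2), (t, 0, 1))            # the functional (9)
sol = sp.solve([sp.diff(J, c) for c in (c11, c21, c22)], [c11, c21, c22])
print(sol)   # expected: {c11: 1, c21: 0, c22: 1}, i.e. the exact solution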
Regarding the practical implementation of the method, we wish to make the following remarks:
  • Regarding the choice of the degree of the polynomial approximation, in the computations we usually start with the lowest degree (i.e., a first-degree polynomial) and compute successively higher-degree approximations, until the error (see the next item) is considered low enough from a practical point of view for the given problem (or, in the case of a test problem, until the error is lower than the error corresponding to the solutions obtained by other methods). Of course, in the case of a test problem whose known solution is a polynomial, one may start directly with the corresponding degree, but this is just a shortcut and by no means necessary when using the method.
  • If the exact solution of the problem is not known, as would be the case for a real-life problem, and as a consequence the error cannot be computed, then instead of the actual error we can use, as an estimate of the error, the value of the remainder R in (4) corresponding to the computed approximation, as mentioned at the end of Section 2.
  • If the problem has an (unknown) exact polynomial solution it is easy to see if PLSM finds it since the value of the minimum of the functional in this case is actually zero. In this situation, if we keep increasing the degree (even though there is no point in that), from the computation we obtain that the coefficients of the higher degrees are actually zero.
  • Regarding the choice of the optimization method used for the computation of the minimum of the functional (9), if the solution of the problem is a known polynomial (such as in the case of this application and several of the following ones) we usually employ the critical (stationary) points method, because in this way by using PLSM we can easily find the exact solution. Such problems are relatively simple ones; the expression of the functional (9) is also not very complicated and indeed the solutions can usually be computed even by hand (as in the case of this application) and in general no concerns of conditioning or stability arise.
    However, for a more complicated (real-life) problem, when the solution is not known (or even if the exact solution is known but is not polynomial), we would not use the critical points method. In fact, we would not even use an iterative-type method, but rather a heuristic algorithm such as Differential Evolution or Simulated Annealing. In our experience with this type of problem even a simple Nelder–Mead type algorithm works well (as was the case for the following Application 7, Application 8 and Application 9); a minimal sketch of such a setup is given right after this list.
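As an illustration of this last remark, the sketch below applies PLSM to the system of Application 7 (Section 3.7) with α_1 = α_2 = 1 and minimizes the functional with SciPy's Nelder–Mead implementation. The degree of the trial polynomials, the quadrature grid and the optimizer options are arbitrary choices made for the sketch and not the settings used to produce the results reported in this paper.

import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import trapezoid
from scipy.optimize import minimize

m = 4                                        # degree of the trial polynomials (illustrative)
tt = np.linspace(0.0, 1.0, 201)              # quadrature grid on [0, 1]

def residuals(c):
    a = np.concatenate(([1.0],  c[:m]))      # x1 ~ 1 + a1*t + ...   (so x1(0) = 1)
    b = np.concatenate(([-1.0], c[m:]))      # x2 ~ -1 + b1*t + ...  (so x2(0) = -1)
    da, db = P.polyder(a), P.polyder(b)
    Iplus  = P.polyint(P.polyadd(a, b))      # int_0^t (x1 + x2) ds
    Iminus = P.polyint(P.polysub(a, b))      # int_0^t (x1 - x2) ds
    R1 = P.polyval(tt, da) - (-P.polyval(tt, b) - P.polyval(tt, Iplus) + tt**2 + tt + 1)
    R2 = P.polyval(tt, db) - (P.polyval(tt, a) - P.polyval(tt, Iminus) - tt - 1)
    return R1, R2

def J(c):                                    # the functional (9), by trapezoidal quadrature
    R1, R2 = residuals(c)
    return trapezoid(R1**2 + R2**2, tt)

res = minimize(J, np.zeros(2 * m), method="Nelder-Mead",
               options={"maxiter": 50000, "maxfev": 50000, "xatol": 1e-12, "fatol": 1e-16})

a = np.concatenate(([1.0],  res.x[:m]))
b = np.concatenate(([-1.0], res.x[m:]))
print(res.fun)                                                    # value of the minimized functional
print(np.max(np.abs(P.polyval(tt, a) - (tt + np.exp(tt)))))       # error vs. exact x1 = t + e^t
print(np.max(np.abs(P.polyval(tt, b) - (tt - np.exp(tt)))))       # error vs. exact x2 = t - e^t

For higher degrees the same structure applies, although the simplex search becomes slower as the number of coefficients grows.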

3.2. Application 2: System of Fractional Volterra Integro-Differential Equations

The second application consists of the equations [17]:
D^{\frac{1}{3}} x_1(t) = \int_0^t (t+s) \cdot x_2(s)\, ds + \dfrac{5 t^3}{6} + \dfrac{t^{2/3}}{\Gamma\!\left(\frac{5}{3}\right)}, \qquad D^{\frac{2}{3}} x_2(t) = \int_0^t (t-s) \cdot x_1(s)\, ds - \dfrac{t^3}{6} - \dfrac{t^{1/3}}{\Gamma\!\left(\frac{4}{3}\right)}
together with the initial conditions x 1 ( 0 ) = 0 , x 2 ( 0 ) = 0 .
The exact solutions of this problem are x_{1e}(t) = t, x_{2e}(t) = -t. Again, using the PLSM steps outlined in the previous section, by choosing the polynomials \tilde{x}_1(t) = c_{11} \cdot t + c_{10} and \tilde{x}_2(t) = c_{21} \cdot t + c_{20}, from the initial conditions it follows that c_{10} = 0 and c_{20} = 0, hence \tilde{x}_1(t) = c_{11} \cdot t and \tilde{x}_2(t) = c_{21} \cdot t, and the corresponding remainder (8) is:
R_c(t, c_{11}, c_{21}) = \left[ \dfrac{(c_{11}-1)\, t^{2/3}}{\Gamma\!\left(\frac{5}{3}\right)} - \dfrac{5 (c_{21}+1)\, t^{3}}{6} \right]^2 + \left[ \dfrac{(c_{21}+1)\, t^{1/3}}{\Gamma\!\left(\frac{4}{3}\right)} - \dfrac{(c_{11}-1)\, t^{3}}{6} \right]^2.
The corresponding functional (9) is:
J(c_{11}, c_{21}) = \left[ \dfrac{1}{252} + \dfrac{27}{28\,\Gamma\!\left(\frac{2}{3}\right)^2} \right] (c_{11}-1)^2 - \left[ \dfrac{15}{28\,\Gamma\!\left(\frac{2}{3}\right)} + \dfrac{3}{13\,\Gamma\!\left(\frac{1}{3}\right)} \right] (c_{11}-1)(c_{21}+1) + \left[ \dfrac{25}{252} + \dfrac{27}{5\,\Gamma\!\left(\frac{1}{3}\right)^2} \right] (c_{21}+1)^2.
Again, by computing the stationary (critical) points of the functional, it is easy to show that the minimum corresponds to c_{11}^0 = 1, c_{21}^0 = -1, thus finding again the exact solution of the problem.
In the following applications, when the exact solutions are also polynomial, we will omit the details of the computations, since they are very similar.
We also remark again that the previous method in [17] (Block-Pulse Functions Method) was only able to find approximate solutions.

3.3. Application 3: System of Fractional Volterra–Fredholm Integro-Differential Equations

The third application consists of the equations [17]:
D^{\frac{1}{2}} x_1(t) = \int_0^1 s^2 t\, x_1(s)\, ds + \int_0^t (s + t^2)\, x_2(s)\, ds + \dfrac{8 t^{3/2}}{3\sqrt{\pi}} + \dfrac{t^5}{3} + \dfrac{t^4}{4} - \dfrac{t}{5}, \qquad D^{\frac{1}{2}} x_2(t) = \int_0^1 (s^2 + t)\, x_1(s)\, ds + \int_0^t s\, t^2\, x_2(s)\, ds - \dfrac{8 t^{3/2}}{3\sqrt{\pi}} + \dfrac{t^6}{4} - \dfrac{t}{3} - \dfrac{1}{5}
together with the initial conditions x 1 ( 0 ) = 0 , x 2 ( 0 ) = 0 .
The exact solutions of this problem are x_{1e}(t) = t^2, x_{2e}(t) = -t^2. Again, using PLSM we are able to find the exact solutions of the problem, while the previous method in [17] (the Block–Pulse Functions Method) was only able to find approximate solutions.

3.4. Application 4: System of Fractional Volterra–Fredholm Integro-Differential Equations with a Weakly Singular Kernel

The next application consists of the equations [21]:
D 1 5 x 1 ( t ) = 2 0 t ( t s ) x 1 ( s ) + x 2 ( s ) d s + 0 1 s e t x 2 ( s ) d s 1 6 ( ( 3 t + 4 ) t + 6 ) t 2 + 5 ( 10 t + 9 ) t 4 / 5 36 Γ 4 5 + 3 e t 10 D 1 3 x 2 ( t ) = 1 3 0 1 ( s + t ) x 1 ( s ) d s + 0 t x 2 ( s ) t s d s 32 t 7 / 2 35 + 81 t 8 / 3 40 Γ 2 3 + 2 t 3 + 2 t + 1 3
together with the boundary conditions x_1(0) + x_1'(0) = 1, x_1(1) - x_1'(1) = -1, x_2(0) + x_2'(0) = -1.
The exact solutions of this problem are x_{1e}(t) = t^2 + t, x_{2e}(t) = t^3 - 1.
Again, using PLSM we are able to find the exact solutions of the problem, while the previous method in [21] (Müntz–Legendre Wavelet Method) was only able to find approximate solutions.

3.5. Application 5: System of Singular Fractional Volterra–Fredholm Integro-Differential Equations

The next application consists of the equations [10]:
D^{\frac{2}{5}} x_1(t) = \int_0^t \dfrac{x_1(s)}{\sqrt{t-s}}\, ds + 2\int_0^1 s\, t\, x_2(s)\, ds - \dfrac{4 t^{3/2}}{3} + \dfrac{t^{3/5}}{\Gamma\!\left(\frac{8}{5}\right)} + \dfrac{2t}{3}, \qquad D^{\frac{1}{2}} x_2(t) = \dfrac{1}{2}\int_0^t \dfrac{x_1(s)}{(t-s)^{2/5}}\, ds + \int_0^1 (s + t)\, x_2(s)\, ds - \dfrac{25\, t^{8/5}}{48} + \dfrac{t}{2} - 2\sqrt{\dfrac{t}{\pi}} + \dfrac{1}{3}
together with the initial conditions x 1 ( 0 ) = 0 , x 2 ( 0 ) = 0 .
The exact solutions of this problem are x_{1e}(t) = t, x_{2e}(t) = -t. Using PLSM we can easily find the exact solutions of the problem, while the previous method in [10] (the Second Chebyshev Wavelets Method) was only able to find approximate solutions.

3.6. Application 6: System of Fractional Volterra–Fredholm Integro-Differential Equations

The next application consists of the equations [9]:
D^{\frac{3}{2}} x_1(t) = -\int_0^1 2t\, \big( x_1(s) - x_2(s) \big)\, ds - \int_0^t x_2(s)\, ds + \dfrac{t^4}{4} + \dfrac{t}{6} + 4\sqrt{\dfrac{t}{\pi}}, \qquad D^{\frac{5}{4}} x_2(t) = -\int_0^1 2t\, \big( x_1(s) + x_2(s) \big)\, ds - \int_0^t x_1(s)\, ds + \dfrac{t^3}{3} + \dfrac{32\, t^{7/4}}{7\,\Gamma\!\left(\frac{3}{4}\right)} + \dfrac{7t}{6}
together with the boundary conditions:
x_1(0) + x_1'(0) = 0, \quad x_2(0) + x_2'(0) = 0, \quad x_1(1) + x_1'(1) = 3, \quad x_2(1) + x_2'(1) = 4.
The exact solutions of this problem are x 1 e ( t ) = t 2 , x 2 e ( t ) = t 3 .
Once again, using PLSM we can easily find the exact solutions of the problem, while the previous method in [9] (Chebyshev Wavelets Method) was only able to find approximate solutions.

3.7. Application 7: System of Fractional Volterra Integro-Differential Equations

The next application consists of the equations [1,2,5,6,7,13,16,22]:
D^{\alpha_1} x_1(t) = -x_2(t) - \int_0^t \big( x_1(s) + x_2(s) \big)\, ds + t^2 + t + 1, \qquad D^{\alpha_2} x_2(t) = x_1(t) - \int_0^t \big( x_1(s) - x_2(s) \big)\, ds - t - 1, \qquad 0 < \alpha_1, \alpha_2 \le 1 \qquad (17)
together with the initial conditions x_1(0) = 1, x_2(0) = -1.
The exact solutions of this problem are only known for α 1 = α 2 = 1 , in which case they are x 1 e ( t ) = t + e t , x 2 e ( t ) = t e t .
This problem is one of the most frequently used test-problems for this type of equation: in 2006 in [1] approximations were computed by means of the Adomian Decomposition Method (but unfortunately no estimations of the error were presented); in 2009 in [2], approximations were computed using the Homotopy Analysis Method (but again no values for the errors were given); in 2013 in [5], approximations were computed by employing the Chebyshev Pseudo-Spectral Method; in 2014 in [7], approximations were computed by using a Reproducing Kernel Hilbert Space Method; in 2015 in [13], approximations were computed by using the Triangular Functions Operational Matrix Method; in 2017 in [6], approximations were computed by employing a Chebyshev Spectral Method; in 2018 in [16], approximations were computed using a Hybrid Bernstein Block-Pulse Functions Method; in 2021 in [22], approximations were computed by employing a Finite Integration Method based on Shifted Chebyshev Polynomials.
In the case α_1 = α_2 = 1, using PLSM we computed polynomial approximate analytical solutions of different degrees. For example, here are the corresponding polynomial approximations of degrees 8, 9, 10 and 11:
  • Approximate polynomial solutions of the 8th degree: x ˜ 1 ( t ) = 0.000047363323161605174 · t 8 + 0.00013516818873445251 · t 7 + 0.0014824853773400631 · t 6 + 0.0082523219647265658 · t 5 + 0.041708385508058084 · t 4 + 0.16665430024625488 · t 3 + 0.50000193853124696 · t 2 + 1.9999998630996803 · t + 1 , x ˜ 2 ( t ) = 0.000029822979113943663 · t 8 0.00021279731585400621 · t 7 0.0013428622317403854 · t 6 0.0083833080287243346 · t 5 0.041640035768208066 · t 4 0.16667385375660027 · t 3 0.49999911119102772 · t 2 0.00000003911783176476 · t 1 ,
  • Approximate polynomial solutions of the 9th degree: x ˜ 1 ( t ) = 0.0000045805173894289 · t 9 + 0.000020580330135790038 · t 8 + 0.00020356905205933829 · t 7 + 0.0013851707127562687 · t 6 + 0.0083349621639923638 · t 5 + 0.041666241483179853 · t 4 + 0.16666672837372302 · t 3 + 0.49999999572948002 · t 2 + 2.0000000000963291 · t + 1 , x ˜ 2 ( t ) = 0.000004566319241696 · t 9 0.000020644096527373962 · t 8 0.0002034502913137179 · t 7 0.0013852891586628564 · t 6 0.0083348940444554066 · t 5 0.041666264103866799 · t 4 0.16666672427992802 · t 3 0.49999999607833814 · t 2 + 0.000000000086711225 · t 1 ,
  • Approximate polynomial solutions of the 10th degree: x ˜ 1 ( t ) = 0.000000457629447356 · t 10 + 0.000002285273424779 · t 9 + 0.000025457659162681733 · t 8 + 0.00019785674926081092 · t 7 + 0.0013891868588020061 · t 6 + 0.0083332323662652953 · t 5 + 0.041666687563315580 · t 4 + 0.16666666422454891 · t 3 + 0.50000000013734605 · t 2 + 1.9999999999974718 · t + 1 , x ˜ 2 ( t ) = 0.000000456481442412 · t 10 0.000002291004565149 · t 9 0.000025445543954087026 · t 8 0.00019787085523824838 · t 7 0.0013891770075813552 · t 6 0.0083332365767372269 · t 5 0.041666686487184833 · t 4 0.16666666437768615 · t 3 0.50000000012695446 · t 2 + 0.000000000002298681 · t 1 ,
  • Approximate polynomial solutions of the 11th degree: x ˜ 1 ( t ) = 0.0000000415720676463 · t 11 + 0.000000228409244376 · t 10 + 0.00000282966676599 · t 9 + 0.000024729572335723796 · t 8 + 0.00019845840335564334 · t 7 + 0.0013888697769646509 · t 6 + 0.0083333385188320542 · t 5 + 0.041666665792542027 · t 4 + 0.16666666675080019 · t 3 + 0.49999999999607688 · t 2 + 2.0000000000000601 · t + 1 , x ˜ 2 ( t ) = 0.000000041485933467 · t 11 0.00000022888238880 · t 10 0.000002828547732026 · t 9 0.000024731062024094003 · t 8 0.00019845717875317951 · t 7 0.0013888704185612361 · t 6 0.0083333383054740074 · t 5 0.041666665835963698 · t 4 0.16666666674580645 · t 3 0.49999999999635321 · t 2 + 0.0000000000000550614 · t 1 ,
In order to illustrate the accuracy of PLSM, for α_1 = α_2 = 1, in Table 1 and Table 2 we compare the absolute errors corresponding to approximations previously computed in [5,6,13,16,22] with the absolute errors corresponding to the approximations of degree 11 computed by using PLSM. Unfortunately, the results in [7] were not presented in the same manner or for the same set of values of t, but the order of the errors corresponding to the most accurate approximation presented there is 10^{-4}. From the tables it is clear that PLSM yields the most accurate approximation.
In Table 3 and Table 4, we present the absolute errors corresponding to the approximations of degrees 8, 9, 10 and 11 computed by using PLSM. The data in the tables clearly illustrate the fast convergence of the method.
We also computed the following pairs of approximate solutions corresponding to fractional values for α 1 and α 2 :
  • For α 1 = α 2 = 0.95 :
    x ˜ 1 0.95 ( t ) = 13.336596654518516 · t 8 + 58.04868042218979 · t 7 104.41832149560244 · t 6 + 100.3317329940824 · t 5 55.58582339818178 · t 4 + 18.264871749227297 · t 3 3.05799038258364 · t 2 + 2.522089765005985 · t + 1 ,
    x ˜ 2 0.95 ( t ) = 1.010091369514156 · t 8 4.138258872192538 · t 7 + 6.924287964060841 · t 6 6.116726577090353 · t 5 + 3.0436948587738555 · t 4 1.1286577662915558 · t 3 0.3206214574055094 · t 2 + + 0.00006408718387997956 · t 1 ,
  • For α 1 = α 2 = 0.9 :
    x ˜ 1 0.9 ( t ) = 31.253952652053513 · t 8 + 135.93820664340987 · t 7 244.40080187060968 · t 6 + 234.7468345224382 · t 5 130.09203064683683 · t 4 + 42.48497831522821 · t 3 7.761606810795855 · t 2 + 3.1511366491250254 · t + 1 ,
    x ˜ 2 0.9 ( t ) = 2.86756718742337 · t 8 11.787046080157845 · t 7 + 19.81925153527951 · t 6 17.603513528739253 · t 5 + 8.9085348315588 · t 4 2.9259778057291115 · t 3 0.017313467543804112 · t 2 + + 0.002291550878261303 · t 1 ,
  • For α 1 = α 2 = 0.85 :
    x ˜ 1 0.85 ( t ) = 54.913870079601516 · t 8 + 238.66680633977725 · t 7 428.8448083902235 · t 6 + 411.7271754534003 · t 5 228.13637416824093 · t 4 + 74.32172448428025 t 3 13.87612351841145 · t 2 + 3.906252271747113 · t + 1 ,
    x ˜ 2 0.85 ( t ) = 5.955767557454868 · t 8 24.518076991649536 · t 7 + 41.320006557593594 · t 6 36.79717777289907 · t 5 + 18.708149140097635 · t 4 5.866845629882316 · t 3 + 0.43943574639818067 · t 2 + + 0.0092595178423443 · t 1 ,
  • For α 1 = α 2 = 0.8 :
    x ˜ 1 0.8 ( t ) = 85.74687740557404 · t 8 + 372.38238988370335 · t 7 668.6868242436764 · t 6 + 641.6798567614443 · t 5 355.4397888374796 · t 4 + 115.60793660707635 t 3 21.72307456006768 · t 2 + 4.809622641243372 · t + 1 ,
    x ˜ 2 0.8 ( t ) = 10.752636697389862 · t 8 44.27164732296013 · t 7 + 74.64117558448292 · t 6 66.48597798671237 · t 5 + 33.783167004112386 · t 4 10.283500255464348 · t 3 + 1.0718433577370883 · t 2 + + 0.025409320786113715 · t 1 ,
  • For α 1 = α 2 = 0.75 :
    x ˜ 1 0.75 ( t ) = 125.52253854224932 · t 8 + 544.6822597729907 · t 7 977.4295185536902 · t 6 + 937.4451862456991 · t 5 519.0568368270845 · t 4 + 168.59767007006937 t 3 31.691922928374943 · t 2 + 5.88682748402819 · t + 1 ,
    x ˜ 2 0.75 ( t ) = 17.817056825410944 · t 8 73.28431838176662 · t 7 + 123.41790237434606 · t 6 109.73394134487035 · t 5 + 55.53207894532952 · t 4 16.47850950328606 · t 3 + 1.8823930527723844 · t 2 + + 0.0580903060452824 · t 1 ,
  • For α 1 = α 2 = 0.7 :
    x ˜ 1 0.7 ( t ) = 176.4443776851996 · t 8 + 765.0144116082502 · t 7 1371.8435559019854 · t 6 + 1314.951935992742 · t 5 727.7246464612392 · t 4 + 236.06781781182352 t 3 44.252744468803 · t 2 + 7.1670094972903735 · t + 1 ,
    x ˜ 2 0.7 ( t ) = 27.73959476453739 · t 8 113.86039914496833 · t 7 + 191.26445050579105 · t 6 169.41875324814768 · t 5 + 85.12493114203431 · t 4 24.618583543171702 · t 3 + 2.832575115266481 · t 2 + + 0.11909168123224027 · t 1 .
The plots of these approximate solutions are presented in Figure 1 and Figure 2 together with the plots corresponding to the exact solutions in the case α 1 = α 2 = 1 .

3.8. Application 8: System of Nonlinear Fractional Integro-Differential Equations

The next application consists of the equations [21]:
D α 1 x 1 ( t ) = 1 2 · x 2 2 ( t ) + 0 t ( ( t s ) · x 2 ( s ) + x 1 ( s ) · x 2 ( s ) ) d s + 1 2 · cosh 2 ( t ) t · sinh ( t ) · cosh ( t ) + 1 D α 2 x 2 ( t ) = 0 1 ( t + s ) · ( x 1 ( s ) x 2 2 ( s ) ) d s + 1 2 · ( 2 · t + 1 ) · cosh 2 ( t ) + sinh ( t ) t · ( cosh ( 1 ) 1 ) 1 e , 0 < α 1 , α 2 1
together with the initial conditions x 1 ( 0 ) = 0 , x 2 ( 0 ) = 1 .
The problem was solved in 2020 in [21] where approximations were computed by means of the Müntz–Legendre Wavelets Method.
The exact solutions of this problem are only known for α_1 = α_2 = 1, in which case they are x_{1e}(t) = \sinh(t), x_{2e}(t) = \cosh(t). For this case we compare the absolute errors corresponding to the approximations computed in [21] with the absolute errors corresponding to several approximations computed by using PLSM. Since in [21] the results were presented as plots of the absolute errors rather than for a given set of values of t, in Table 5 we compare the corresponding maximal absolute errors. Again, the results illustrate the accuracy and the fast convergence of PLSM.
We also computed several pairs of approximate solutions corresponding to fractional values for α 1 and α 2 , whose plots are presented in Figure 3 and Figure 4 together with the plots corresponding to the exact solutions in the case α 1 = α 2 = 1 .

3.9. Application 9: System of Fractional Volterra–Fredholm Integro-Differential Equations

The next application consists of the equations [17]:
D^{\alpha_1} x_1(t) = \int_0^1 t\,(t+s)\, x_1(s)\, ds + \int_0^t s\,(t-s)\, x_2(s)\, ds - \dfrac{t\,(2t+1)}{\pi} + \pi \cos(\pi t) - t \sinh(t) + 2\cosh(t) - 2, \qquad D^{\alpha_2} x_2(t) = \int_0^1 s\,(t+s)\, x_1(s)\, ds + \int_0^t t\,(t-s)\, x_2(s)\, ds - \dfrac{t+1}{\pi} + t\,\big(t - \sinh(t)\big) + \cosh(t) + \dfrac{4}{\pi^3}, \qquad 0 < \alpha_1, \alpha_2 \le 1 \qquad (19)
together with the initial conditions x 1 ( 0 ) = 0 , x 2 ( 0 ) = 0 .
The problem was studied in 2019 in [17] where approximations were computed by means of a Block–Pulse Functions Method, but unfortunately no errors corresponding to the approximations were presented.
The exact solutions of this problem are only known for α 1 = α 2 = 1 , in which case they are x 1 e ( t ) = sin ( π · t ) , x 2 e ( t ) = sinh ( t ) . For this case, in Table 6 we will present the maximal absolute errors corresponding to several approximations computed by using PLSM.
We also computed several pairs of approximate solutions corresponding to fractional values for α 1 and α 2 , whose plots are presented in Figure 5 and Figure 6 together with the plots corresponding to the exact solutions in the case α 1 = α 2 = 1 .

3.10. Application 10: System of Singular Fractional Volterra Integro-Differential Equations

The last application consists of the equations [8,16]:
D^{\alpha_1} x_1(t) = x_3(t) + \int_0^t \dfrac{s\, t\, x_1(s) + x_2(s)}{\sqrt{t-s}}\, ds - \dfrac{16\, t^{5/2}}{15} + \dfrac{16\, t^{7/2}}{15} - \dfrac{32\, t^{9/2}}{35} - t^3 + 2t - 1, \qquad D^{\alpha_2} x_2(t) = x_1(t) + \int_0^t \dfrac{x_2(s) + x_3(s)}{\sqrt[3]{t-s}}\, ds - \dfrac{27\, t^{8/3}}{40} - \dfrac{243\, t^{11/3}}{440} - t^2 + 3t, \qquad D^{\alpha_3} x_3(t) = x_3(t) + \int_0^t \dfrac{s^2\, t\, x_2(s) + x_1(s)}{\sqrt[4]{t-s}}\, ds + \dfrac{16\, t^{7/4}}{21} - \dfrac{128\, t^{11/4}}{231} - t^3 + 3t^2 - \dfrac{8192\, t^{23/4}}{21945}, \qquad 0 < \alpha_1, \alpha_2, \alpha_3 \le 1 \qquad (20)
together with the initial conditions x 1 ( 0 ) = 0 , x 2 ( 0 ) = 0 , x 3 ( 0 ) = 0 .
The problem was studied in 2014 in [8], where approximations were computed by means of a Chebyshev Wavelets Expansion Method and in 2018 in [16] by means of a Block–Pulse Function Method.
The exact solutions of this problem are only known for α_1 = α_2 = α_3 = 1, in which case they are x_{1e}(t) = t^2 - t, x_{2e}(t) = t^2, x_{3e}(t) = t^3. For this case, using PLSM we are able to compute the exact solution.
We also computed several pairs of approximate solutions corresponding to fractional values for α 1 and α 2 , whose plots are presented in Figure 7, Figure 8 and Figure 9.

4. Conclusions

The study presents the Polynomial Least Squares Method as a straightforward, efficient and accurate method to compute approximate analytical solutions for a very general class of systems of fractional nonlinear integro-differential equations.
The main advantages of PLSM are:
  • The simplicity of the method—the computations involved in the use of PLSM are as straightforward as it gets; in fact in the case of a lower degree polynomial the computations sometimes can be performed even by hand, as illustrated by the first application.
  • The accuracy of the method—clearly illustrated by the applications presented, since by using PLSM we were able to compute approximations at least as good (if not better) than the approximations computed by other methods.
  • The simplicity of the approximation—since the approximations are polynomial, they also have the simplest possible form and thus any subsequent computation involving the solution can be performed with ease. While it is true that for some approximation methods which work with polynomial approximations the convergence may be very slow, this is not the case here (see for example Application 7, Application 8 and Application 9, which are representative of the performance of the method).
The study includes an extensive list of applications, containing most of the usual test problems used for this type of equation, and compares our results with previous results obtained by using other well-known methods. For the test problems whose exact solution is a polynomial one, PLSM is able to find the exact solution in a simple manner, while most of the other methods previously used were only able to compute approximate solutions. If the solution is not polynomial, PLSM is able to find approximate solutions, again in a very straightforward way, with errors usually smaller than the ones corresponding to the approximations computed by other methods.
Taking into account the above considerations, the results of this study recommend PLSM as a very useful tool in the research of systems of fractional nonlinear integro-differential equations.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Momani, S.; Qaralleh, A. An efficient method for solving systems of fractional integro-differential equations. Comput. Math. Appl. 2006, 52, 459–470. [Google Scholar] [CrossRef] [Green Version]
  2. Zurigat, M.; Momani, S.; Alawneh, A. Homotopy Analysis Method for systems of fractional integro-differential equations. Neural Parallel Sci. Comput. 2009, 17, 169–186. [Google Scholar]
  3. Akbar, M.; Nawaz, R.; Ahsan, S.; Nisar, K.S.; Abdel-Aty, A.-H.; Eleuch, H. New approach to approximate the solution for the system of fractional order Volterra integro-differential equations. Results Phys. 2020, 19, 103453. [Google Scholar] [CrossRef]
  4. Saeed, R.K.; Sdeq, H.M. Solving a system of linear Fredholm fractional integro-differential equations using Homotopy Perturbation Method. Aust. J. Basic Appl. Sci. 2010, 4, 633–638. [Google Scholar]
  5. Khader, M.M.; Sweilam, N.H. On the approximate solutions for system of fractional integro-differential equations using Chebyshev pseudo-spectral method. Appl. Math. Model. 2013, 37, 9819–9828. [Google Scholar] [CrossRef]
  6. Zedan, H.A.; Tantawy, S.S.; Sayed, Y.M. New solutions for system of fractional integro-differential equations and Abel’s integral equations by Chebyshev Spectral Method. Math. Probl. Eng. 2017, 2017, 7853839. [Google Scholar] [CrossRef]
  7. Bushnaq, S.; Maayah, B.; Momani, S.; Alsaedi, A. A reproducing kernel Hilbert space method for solving systems of fractional integro-differential equations. Abstr. Appl. Anal. 2014, 2014, 103016. [Google Scholar] [CrossRef] [Green Version]
  8. Heydari, M.H.; Hooshmandasl, M.R.; Mohammadi, F.; Cattani, C. Wavelets method for solving systems of nonlinear singular fractional Volterra integro-differential equations. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 37–48. [Google Scholar] [CrossRef]
  9. Zhou, F.; Xu, X. Numerical solution of fractional Volterra-Fredholm integro-differential equations with mixed boundary conditions via Chebyshev wavelet method. Int. J. Comput. Math. 2018, 96, 436–456. [Google Scholar] [CrossRef]
  10. Bargamadi, E.; Torkzadeh, L.; Nouri, K.; Jajarmi, A. Solving a system of fractional-order Volterra-Fredholm integro-differential equations with weakly singular kernels via the second Chebyshev wavelets method. Fractal Fract. 2021, 5, 70. [Google Scholar] [CrossRef]
  11. Al-Marashi, A.A. Approximate solution of the system of linear fractional integro-differential equations of Volterra using B-Spline method. Am. Rev. Math. Stat. 2015, 3, 39–47. [Google Scholar] [CrossRef] [Green Version]
  12. Khalil, H.; Khan, R.A. Numerical scheme for solution of coupled system of initial value fractional order Fredholm integro-differential equations with smooth solutions. J. Math. Ext. 2015, 9, 39–58. [Google Scholar]
  13. Asgari, M. Numerical solution for solving a system of fractional integro-differential equations. IAENG Int. J. Appl. Math. 2015, 45, 1–7. [Google Scholar]
  14. Deif, S.A.; Grace, S.R. Iterative refinement for a system of linear integro-differential equations of fractional type. J. Comput. Appl. Math. 2016, 294, 138–150. [Google Scholar] [CrossRef]
  15. Deif, S.A.; Grace, S.R. Fast iterative refinement method for mixed systems of integral and fractional integro-differential equations. Comput. Appl. Math. 2018, 37, 2354–2379. [Google Scholar] [CrossRef]
  16. Hesameddini, E.; Shahbazi, M. Hybrid Bernstein Block-Pulse functions for solving system of fractional integro-differential equations. Int. J. Comput. Math. 2018, 95, 2287–2307. [Google Scholar] [CrossRef]
  17. Xie, J.; Yi, M. Numerical research of nonlinear system of fractional Volterra–Fredholm integral–differential equations via Block-Pulse functions and error analysis. J. Comput. Appl. Math. 2019, 345, 159–167. [Google Scholar] [CrossRef]
  18. Wang, J.; Xu, T.-Z.; Wei, Y.-Q.; Xie, J.-Q. Numerical simulation for coupled systems of nonlinear fractional order integro-differential equations via wavelets method. Appl. Math. Comput. 2018, 324, 36–50. [Google Scholar]
  19. Mohammed, O.H.; Malik, A.M. A modified computational algorithm for solving systems of linear integro-differential equations of fractional order. J. King Saud Univ. Sci. 2019, 31, 946–955. [Google Scholar] [CrossRef]
  20. Didgar, M.; Vahidi, A.R.; Biazar, J. An approximate approach for systems of fractional integro- differential equations based on Taylor expansion. Kragujev. J. Math. 2020, 44, 379–392. [Google Scholar] [CrossRef]
  21. Saemi, F.; Ebrahimi, H.; Shafiee, M. An effective scheme for solving system of fractional Volterra–Fredholm integro-differential equations based on the Müntz–Legendre wavelets. J. Comput. Appl. Math. 2020, 374, 112773. [Google Scholar] [CrossRef]
  22. Duangpan, A.; Boonklurb, R.; Juytai, M. Numerical solutions for systems of fractional and classical integro-differential equations via Finite Integration Method based on shifted Chebyshev polynomials. Fractal Fract. 2021, 5, 103. [Google Scholar] [CrossRef]
Figure 1. Approximate solutions x_1 of (17) for several values of α_1 and α_2.
Figure 2. Approximate solutions x_2 of (17) for several values of α_1 and α_2.
Figure 3. Approximate solutions x_1 of (18) for several values of α_1 and α_2.
Figure 4. Approximate solutions x_2 of (18) for several values of α_1 and α_2.
Figure 5. Approximate solutions x_1 of (19) for several values of α_1 and α_2.
Figure 6. Approximate solutions x_2 of (19) for several values of α_1 and α_2.
Figure 7. Approximate solutions x_1 of (20) for several values of α_1 and α_2.
Figure 8. Approximate solutions x_2 of (20) for several values of α_1 and α_2.
Figure 9. Approximate solutions x_3 of (20) for several values of α_1 and α_2.
Table 1. Comparison of absolute errors of x_1 for problem (17) in the case α_1 = α_2 = 1.

t     [5]           [13]          [6]           [16]           [22]           11th-deg. PLSM
0.1   1 × 10^{-5}   9 × 10^{-5}   2 × 10^{-7}   5 × 10^{-11}   6 × 10^{-15}   1 × 10^{-17}
0.2   6 × 10^{-5}   1 × 10^{-4}   1 × 10^{-7}   5 × 10^{-11}   -              2 × 10^{-16}
0.3   9 × 10^{-5}   2 × 10^{-4}   8 × 10^{-8}   1 × 10^{-12}   -              2 × 10^{-16}
0.4   7 × 10^{-5}   1 × 10^{-4}   5 × 10^{-8}   9 × 10^{-11}   -              4 × 10^{-16}
0.5   1 × 10^{-5}   7 × 10^{-5}   4 × 10^{-8}   2 × 10^{-11}   1 × 10^{-15}   8 × 10^{-16}
0.6   4 × 10^{-5}   2 × 10^{-4}   1 × 10^{-8}   6 × 10^{-11}   -              1 × 10^{-17}
0.7   7 × 10^{-5}   3 × 10^{-4}   5 × 10^{-8}   1 × 10^{-11}   -              1 × 10^{-17}
0.8   5 × 10^{-5}   4 × 10^{-4}   7 × 10^{-8}   7 × 10^{-11}   -              8 × 10^{-16}
0.9   5 × 10^{-6}   4 × 10^{-4}   9 × 10^{-8}   2 × 10^{-11}   1 × 10^{-15}   1 × 10^{-17}
Table 2. Comparison of absolute errors of x_2 for problem (17) in the case α_1 = α_2 = 1.

t     [5]           [13]          [6]           [16]           [22]           11th-deg. PLSM
0.1   1 × 10^{-5}   9 × 10^{-5}   2 × 10^{-7}   4 × 10^{-11}   2 × 10^{-15}   1 × 10^{-17}
0.2   6 × 10^{-6}   1 × 10^{-4}   5 × 10^{-7}   5 × 10^{-11}   -              2 × 10^{-16}
0.3   1 × 10^{-5}   2 × 10^{-4}   8 × 10^{-7}   4 × 10^{-12}   -              1 × 10^{-17}
0.4   5 × 10^{-6}   1 × 10^{-4}   9 × 10^{-7}   8 × 10^{-11}   -              4 × 10^{-16}
0.5   3 × 10^{-5}   7 × 10^{-5}   3 × 10^{-6}   1 × 10^{-12}   2 × 10^{-15}   7 × 10^{-16}
0.6   7 × 10^{-5}   2 × 10^{-4}   7 × 10^{-6}   7 × 10^{-11}   -              4 × 10^{-16}
0.7   7 × 10^{-5}   3 × 10^{-4}   1 × 10^{-6}   2 × 10^{-11}   -              2 × 10^{-16}
0.8   4 × 10^{-5}   4 × 10^{-4}   7 × 10^{-7}   7 × 10^{-11}   -              4 × 10^{-16}
0.9   3 × 10^{-6}   4 × 10^{-4}   3 × 10^{-7}   4 × 10^{-11}   1 × 10^{-15}   1 × 10^{-17}
Table 3. Comparison of PLSM absolute errors of x_1 for problem (17) in the case α_1 = α_2 = 1.

t     8th-deg. PLSM   9th-deg. PLSM   10th-deg. PLSM   11th-deg. PLSM
0.1   3 × 10^{-9}     8 × 10^{-13}    7 × 10^{-15}     1 × 10^{-17}
0.2   3 × 10^{-9}     1 × 10^{-12}    1 × 10^{-14}     2 × 10^{-16}
0.3   4 × 10^{-9}     8 × 10^{-13}    3 × 10^{-14}     2 × 10^{-16}
0.4   4 × 10^{-9}     5 × 10^{-13}    2 × 10^{-14}     4 × 10^{-16}
0.5   3 × 10^{-9}     1 × 10^{-12}    2 × 10^{-15}     8 × 10^{-16}
0.6   2 × 10^{-9}     3 × 10^{-13}    2 × 10^{-14}     1 × 10^{-17}
0.7   2 × 10^{-9}     9 × 10^{-13}    2 × 10^{-14}     1 × 10^{-17}
0.8   2 × 10^{-9}     1 × 10^{-12}    9 × 10^{-15}     8 × 10^{-16}
0.9   1 × 10^{-9}     7 × 10^{-13}    5 × 10^{-15}     1 × 10^{-17}
Table 4. Comparison of PLSM absolute errors of x_2 for problem (17) in the case α_1 = α_2 = 1.

t     8th-deg. PLSM   9th-deg. PLSM   10th-deg. PLSM   11th-deg. PLSM
0.1   2 × 10^{-12}    7 × 10^{-13}    5 × 10^{-15}     1 × 10^{-17}
0.2   4 × 10^{-10}    1 × 10^{-12}    1 × 10^{-14}     2 × 10^{-16}
0.3   1 × 10^{-9}     9 × 10^{-13}    2 × 10^{-14}     1 × 10^{-17}
0.4   9 × 10^{-10}    4 × 10^{-13}    2 × 10^{-14}     4 × 10^{-16}
0.5   2 × 10^{-10}    1 × 10^{-12}    9 × 10^{-16}     7 × 10^{-16}
0.6   6 × 10^{-10}    5 × 10^{-13}    2 × 10^{-14}     4 × 10^{-16}
0.7   2 × 10^{-9}     9 × 10^{-13}    3 × 10^{-14}     2 × 10^{-16}
0.8   2 × 10^{-9}     1 × 10^{-12}    1 × 10^{-14}     4 × 10^{-16}
0.9   7 × 10^{-10}    8 × 10^{-13}    7 × 10^{-15}     1 × 10^{-17}
Table 5. Comparison of the maximal absolute errors of the approximate solutions x_1 (first row) and x_2 (second row) for problem (18) in the case α_1 = α_2 = 1.

        [21]          PLSM 6th deg.   PLSM 7th deg.   PLSM 8th deg.
x_1     4 × 10^{-8}   4 × 10^{-8}     1 × 10^{-9}     3 × 10^{-11}
x_2     6 × 10^{-8}   4 × 10^{-8}     2 × 10^{-9}     9 × 10^{-11}
Table 6. Comparison of the maximal absolute errors of the approximate solutions x_1 (first row) and x_2 (second row) for problem (19) in the case α_1 = α_2 = 1.

        PLSM 5th deg.   PLSM 6th deg.   PLSM 7th deg.   PLSM 8th deg.
x_1     8 × 10^{-4}     1 × 10^{-5}     8 × 10^{-6}     1 × 10^{-9}
x_2     1 × 10^{-6}     4 × 10^{-8}     3 × 10^{-9}     3 × 10^{-11}
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
